AMD has announced the FirePro S9170, its new flagship "server GPU." This new model comes with 32GB of GDDR5 memory onboard, double that of the previous FirePro S9150, on a 512-bit bus for a claimed 320GB/s of throughput. With all that GDDR5 memory, it's a pretty safe bet that Fiji was not a good candidate for this market, given its memory capacity constraints. The extra RAM is meant to improve the performance of applications that need to keep large data sets as close to the GPU as possible.

The passively-cooled S9170 boasts theoretical performance ratings of 5.24 TFLOPS for single-precision workloads and 2.62 TFLOPS for double-precision calculations, good for approximately a 3% improvement over the S9150. AMD only talks about raw performance numbers in the FirePro S9170's announcement and product listing, and does not disclose the number of compute units onboard. Based on the Newegg listing for the FirePro S9150, which has 2,816 stream processors and only slightly lower performance ratings, it seems the new card may be based on the same Hawaii GPU as the consumer R9 290 and R9 390 series.
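For a rough sanity check, here's the arithmetic those numbers imply: a minimal Python sketch, where the 2,816 stream-processor count is assumed from the S9150 listing and the engine clock is inferred, not an official spec.

```python
# Back-of-envelope check of the S9170's claimed figures.
sps = 2816                # Hawaii stream processors (assumed, per S9150)
sp_tflops = 5.24          # claimed single-precision rate

# Each SP does one FMA (2 FLOPs) per clock, so:
clock_ghz = sp_tflops * 1e3 / (sps * 2)
print(round(clock_ghz * 1000))   # implied engine clock, ~930 MHz

# FirePro Hawaii runs double precision at half the SP rate:
print(sp_tflops / 2)             # 2.62 TFLOPS

# Bandwidth: 512-bit bus at a 5 GT/s effective memory data rate
print(512 / 8 * 5)               # 320.0 GB/s
```

The 320GB/s claim thus points to the same 5 GT/s GDDR5 the S9150 used, just with denser packages.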

[deleted]
Apologies – don’t know how to totally terminate a post. That would have been me at my most self-hating worst. Sorry.
Probably I should quit here. I don't know. My depression and anxiety are rearing up at me. I thought that an anonymous self would help me. Seems rather not.
Editing: people tend to hate me in real life; here seems to confirm that. I'm cutting myself… that seems to help a little.
Exactly. It’s passively cooled by the server definition of the term, which is a more than reasonable definition considering that’s the market. Though I can see how that might be confusing for folks who aren’t familiar with real glass room servers or how they get spec’d.
Thanks for the information! 😀
Skimming 20 or so reviews, I have yet to see a 680 overclock under 7 Gbps, but thank you for bringing it to my attention. 😀
Actually saw one site supposedly get an extra 700MHz out of their VRAM for 8.8 Gbps. I'm a little dubious of that… Maybe they meant 700MHz after double-pumping, for an increase to a more reasonable 7.4 Gbps.
So, the 680 → 770 was a very minor respin, and the 290/290X → 390/390X is a minor respin as well, as so far it appears to reach overclocks about 5% higher on air.
Indicating some small changes in the GPU core.
the 770 came with faster ram than the 680 too
not ALL 680’s have the fast ram
the 770 came with 7ghz ram but the 680 only came with 6ghz ram
so no you cannot flash EVERY 680 to a 770
You can flash EVERY single 680 to a 770. [I have been informed the RAM speed on the cards did change; in my short exploration of many reviews, I've not seen any serious overclocks that fall under 7 Gbps (a few basically said "after raising the core 200MHz we raised the RAM 50MHz for shits and giggles," or didn't check to see how fast it rises). [b<]Sorry for the mistake/bad memory/etc. My very bad[/b<].]

Disregarding the RAM limit, not every 290/290X can hit 6GHz effective GDDR5 clockspeeds. As a reference, so far results on HWBot show the 390X having a just-under-5% higher average overclock on air than the 290X; the 770 is just over 1% higher clockspeed than the 680. Now, the 390X has only 27 submissions, so time will tell. I expect it will maintain that roughly 5% (just over 50MHz) lead.

Hawaii is fully unlocked voltage-wise, so changing the firmware won't change what a certain voltage can do. Care to explain why they overclock better? I'm guessing this is a "major" stepping change (major is A → B → C and so forth; minor is A0 → A1 → A2 and so forth) for the parts. Add in that not all review samples can hit 6 Gbps effective for VRAM, and the concept that the 390X is a pure rebrand of the 290X is "killed," since firmware changes can't affect 100% of cards. A rebrand would mean every single 290X could be made into a 390X (disregarding the RAM limit again), which is untrue.
people are flashing their 290’s and 290X’s to 390’s and 390X’s
[quote<]with pretty large card changes.[/quote<] oh and the board is exactly the same too [url<]http://www.techpowerup.com/213455/radeon-r9-390x-taken-apart-pcb-reveals-a-complete-re-brand.html[/url<]
every single 680 can be flashed to a 770 with a BIOS change.
Nothing close to every 290/290X can be flashed to a 390/390X, even ignoring the 4GB vs. 8GB RAM limit.
You don’t have to like it, but the 770 was a flat-out rebrand while the 390/390X are [b<]minor[/b<] respins.
Most server parts are “passively cooled” with a minimum airflow through the chassis as an installation requirement.
Actually, many of the changes between the 290/290X -> 390/390X are the same changes as between the GTX 680 -> GTX 770. Nvidia altered some of its power management to enable higher clocks/better turbo, though in fairness some of that could have simply been newer firmware (which the R9 390/390X also benefits from).
The idea of higher core clocks and memory clocks is also shared between each company’s respective generation of cards.
I know Matrox has been licensing AMD GPUs for some of their products, but they haven’t supported a higher monitor count than what AMD does for the same chip. Matrox has, however, produced dual-GPU cards using low-end chips to go past 6 displays per card.
Yup, same guy that is usually seeing the school nurse for a bloody nose.
Actually, 8GB was just a side effect of the new GDDR5 packages, as discussed on the podcast (I think, or did I read it elsewhere?)
The old 2Gib GDDR5 packages for 4GB configurations required a higher driving voltage, meaning that the memory controllers could only be run at 5GHz. The newest 4Gib GDDR5 packages can be driven at lower voltages, meaning that they can be clocked faster for the same TDP, and likely better yields than the older 2Gib packages.
The result is that Hawaii needs 16 GDDR5 chips for its eight dual 32-bit memory controllers, and the options are basically using old, slower, hungrier 2Gib packages for a total of 32Gib (4GB) or newer, faster, more efficient 4Gib packages for a total of 64Gib (8GB).
You couldn’t get the benefits of the new packages in a 4GB configuration without reducing the number of chips from 16 to 8, and that would leave only a 256-bit bus and halve your bandwidth. That would be a total disaster!
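The capacity arithmetic in the comments above can be sketched like this, assuming the 16-chip, 32-bits-per-chip layout they describe:

```python
# Sketch of Hawaii's GDDR5 configuration options, per the thread above.
BUS_WIDTH_BITS = 512
CHIP_IO_BITS = 32
chips = BUS_WIDTH_BITS // CHIP_IO_BITS   # 16 chips fill the 512-bit bus

GIB_PER_GB = 8                           # 8 gigabits per gigabyte

old_package_gib = 2   # older, higher-voltage packages (290/290X era)
new_package_gib = 4   # newer, lower-voltage, faster packages

print(chips * old_package_gib / GIB_PER_GB)  # 4.0 GB with old packages
print(chips * new_package_gib / GIB_PER_GB)  # 8.0 GB with new packages

# Getting 4GB from the new packages means dropping to 8 chips,
# which also shrinks the bus:
print((chips // 2) * CHIP_IO_BITS)           # 256-bit, half the bandwidth
```

So 8GB falls out of the chip count and the new package density, not a deliberate capacity target.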
huh. Is he the one who hides under the play-structure and shivers whenever any people wearing blue look in his direction?
290/290x –> 390/390x is not a rebrand.
It is a minor respin with pretty large card changes.
A rebrand would have been the 680 to 770: if AMD had shipped 390/390X cards still with 4GB of RAM clocked at 5GHz, changing only clockspeeds.
Intel can put different RAM amounts on the package; they can sell a 4/8/16GB i3, and the same for i5, i7, etc.
Intel also has a huge motive to get involved in somehow fabbing the memory. Their fab utilization is somewhere between 60 and 70% from what I’ve heard.
Dear chuckula,
We are still using GDDR5 instead of HBM because the Hawaii/Grenada chip does much better DP compute than Fiji. The Grenada chip is based on Hawaii and was designed for GDDR5, not HBM. Thank you for your concern chuckles.
We barely have a PR department and marketing is nearly nonexistent. However, we did not hire you. Please refrain from helping us with your PR.
Thank you for choosing AMD and Windows for gaming,
AMD Management
Ok AMD, that’s nice. Now if you could sell me a V-class FirePro card with some outputs (unlike the S-class) and dual Fiji chips, that would be even better. Please?
NOT passively cooled, as the spec sheet (and TR) claims. The card just doesn’t supply the fan; you still need to move 20 CFM over it at a max inlet temp of 45°C.
The card is rated at 275 W. That’s not something you passively cool in any reasonable definition of the term.
4GB is adequate for [b<]current gaming[/b<], but not ideal. It'll have to do for first-gen HBM, with driver optimisation attempting to pick up the slack.

4GB is inadequate for workstations. They [b<]do not run game engines[/b<]; they run DP compute and load vast, unoptimised data sets that shouldn't be RAM-constrained. Got more RAM? You can fit larger models in a 3D viewport.

Failure to recognise this fundamental difference will make you look ignorant.
You have obviously not received one of the flyers he hands out at school during recess.
Hawaii needs like 18 and a half gddr5 chips because it’s got a wider bus than something like a 980 ti. So they had to use smaller chips to maintain a reasonable capacity and cost.
But smaller chips are old and aren’t built to run as fast, or as efficiently.
When they wanted to rebrand Hawaii, they needed to increase memory clocks and they needed better gddr5. The only option is to use the same modern gddr5 that Nvidia has been using. But as mentioned, to populate their wide bus, they needed a ton of the stuff.
Because you’re acting like an asshole.
It’s possible to be correct and still an asshole. But people only notice the asshole part, so it doesn’t really matter if you’re correct or not.
Maybe it’ll work if we see cpus with different amounts of ram.
You’ll get your “i3” with like 4GB of RAM, and you’ll have your i7 with like 16GB of RAM. Intel would definitely go for that, as you suggested.
But then the oems also want a dirt cheap machine with as much ram and storage as possible because those are two important marketing metrics.
I’m not sure Intel would be game for putting 16gb of ram on the package of a dirt cheap cpu.
Would be doable on a quad-GPU AMD card. I think there was even one card maker doing that, but I cannot find a source.
Sorry, not built for games.
Built for Computational Fluid Dynamics (CFD) and other GPGPU problems and maybe VDI.
what about the 8gb 390s?
oh and chuckula was just being chuckula and being funny with a point
workstation with 32gb gddr5
high end gaming (fury) with 4gb hbm
mid/high gaming (390) with 8gb gddr5
the choices of the amount of memory they put in cards seems to be rather odd
amd: “this is the fastest card we have – lets only use 4gb of the fastest memory available, it is enough right?”
amd: “lets rebrand these 290’s into 390’s but we cant just change the first digit – I KNOW lets double the ram in it and say games need more ram now”
amd: “ok now this is our high end workstation card it could use that nice new hbm….” another amd person: “NAH mate just shove as much of that old gddr5 in there and call it a day – i want lunch”
im totally not an amd fanboy
but 4gb is enough for now on graphics cards – 8gb was just stupid from amd imo, they should have kept the 390 series 4gb and either reduced prices even more or at least kept them the same prices
pretty much anything that uses more than 4gb will reduce the framerate down to unplayable levels
if there is a case where you can use more than 4gb and it will be above 60fps/16.7 ms most of the time and not be choppy then i dont think reducing a setting or two will hurt anyone
FP64 was cut down to GCN’s lowest level to save transistors yes.
I’m guessing AMD would have had to cut 4-8 CUs (maybe also making the die hit the 600mm² limit) to get full FP64.
Most uses for FP64 require lots of RAM, so it didn’t make much sense for AMD to include it. Or rather, including enough would have cost them more than the likely return.
Hope you calm down pretty good/soon/well with games. My gaming today has been… Not very calming.
NoOne… I would like to reply, but I’m now calming down and playing some games. So, I can’t.
64-bit FP is limited in Fiji to save transistors? I suppose so. Fiji’s design was 28nm… if AMD had any 20nm designs, they killed them when the impossibility of producing them became apparent. But I didn’t say that, because I’m trying to calm down and am not looking to see if anyone has replied to my post. So forget I wrote this. Because I didn’t…
Chuck-less may be a totally unpleasant ass, but I admit I have no real grounds to call him a moron.
Gosh! I know I did. Regretted that a bit after I posted. But still: he really attacked me, and I’ve put up with a lot of insulting rubbish from his direction and his kin. I dunno, maybe I don’t really fit in here? ‘Cos of my nature I’m over-sensitive. I’m now going to play some games and try to calm down!…
AMD did not design HBM itself. It did do a great deal of the work, possibly over half of it; however, it by no means did all of it. I would guess 2/3rds at most, which is still a huge amount.
There are some workstation applications where Fiji silicon should be a good fit, although, the number of cases is probably in the single digits.
FP64 (aka “DP”) is 1/16 rate in Fiji, the lowest GCN can be configured for FP64. The biggest reason for that is 20nm being DOA for big chips, and also because AMD understands that doing FinFETs, a new memory controller, a new memory type, and a new architecture (I assume GCN will have a major revision for 14/16FF) all on an immature node is dumb.
(Nvidia has historical issues with fabs, as well as historically worse memory controllers. I personally think there should be a 28nm GM212 with HBM (20 SMM + a 4096-6144-bit bus) to help with those issues.)
While some parts of Chuckula’s statement I feel are nonsense, he is by no means a moron, nor, as far as I can tell, anything remotely close to one. His wording and tone might need a little work at times. Same for me and countless others, however.
[waiting to get told I’m super pro Nvidia and to laugh when someone says so.]
You did open with name-calling.
Why? Because I disagree with you? Or because my post is offensive or corrupt in some way?
Because you, Mr. Cuckula, have paid an amount of money whilst I have paid nothing, do you imagine that you will be treated in an exceptional way by the moderators of this site?
WHAT DID I POST – that was so offensive that you seek moderators help in shutting my mouth?
Normally downvoting would be enough, chuck-empty-of jokes.
Moderators, please treat this post like the toxic waste it is.
Dear Insufferable-Moron,
HBM was designed by AMD several years ago. If they had managed to actually make products with it, then the four-gig limit wouldn’t have been a problem. They didn’t, so it really is.
Fiji, I guess, can’t be utilized as a workstation product – not just because of the RAM (obviously workstations use massive memory) but also because, according to AnandTech, DP has been reduced to 1/32 in hardware.
So it seems we have this odd scenario: Fiji is overwhelmingly aimed at gamers, and not the high-priced workstation market. I have no idea what AMD are playing at. All I can guess is that they are coasting till the next-gen fab processes are out… in 2016, along with Zen.
For myself… I give up.
But! Your snide and silly comments are a bit useless. I’m a little embarrassed that no one wants to call you out on it – but I guess you’re an established figure here, and that probably counts for more than any actual reason.
Edit: Severe grammatical errors! Not changed the actual content.
are you saying that my quad 12 core Xeon setup running at 2Ghz each with 16 of these cards isn’t a good gaming setup? =[
/troll
Having RAM only on-package is more profitable for Intel and can be just as profitable, if not more so, for OEMs, cutting the supply chain down.
It saves a little on boards, allows for smaller, cheaper chassis, etc.
This is the way forward for consumer parts, the biggest reason being laptops. HEDT parts may be cut down from Xeon dies to keep those around.
No way that happens, RAM upgrades are way too profitable for OEMs.
Intel only gouges on the features that enthusiasts would care about. I’m talking about overclocking, multi-GPU, eDRAM, bigger iGPUs, faster DDR4, excessive cores, etc.
The core things that matter on an OEM machine are RAM capacity, storage capacity, core count, and clock speed. That’s about it. You can get a pretty full spectrum of those traits with pretty run-of-the-mill parts.
The only exception that I see is that we might get tiny NUC-esque motherboards without DIMMs, but those aren’t mainstream.
you want more RAM than what’s on package? Send your motherboard/laptop into [OEM HERE] and you can get an upgrade for only [COST THE SAME AS MAKING NEW COMPUTER] with free shipping!
I do think that most products will sadly castrate the DIMM support and sell it as a super awesome feature like Intel’s K series. Although, I hope not.
I believe some display company for signage is using AMD cards for their newer stuff. Maybe someone can hack the software into enabling more displays?
Well, I assume they manage to get more than 6 displays for digital signage purposes.
Yes, that was my point, different markets need all that RAM 😀
They also recently stated, for the first time, that they have two engineers dedicated to reducing VRAM usage, probably starting some months ago. I wish them success, as otherwise a lot of Fury X and future Fiji-silicon card owners get burned badly.
Bad for AMD and bad for those consumers.
That’s probably how future CPUs will work. At least the ones with massive GPUs on-package.
Throw 8-16GB of pricey HBM on the fire and back it up with ~32GB of cheaper DDR4.
Om nom nom nom. I’ll eat that stuff up!
Any gamer that buys this GPU for any-K gaming might as well buy a GPU that actually has display outs.
Any gamer that buys this GPU didn’t notice that it doesn’t have any ports to connect to a monitor!
Fluid sims! I would very much like a 32GB GPU, if it wasn’t so prohibitively expensive that I could just buy an entire second workstation for the same money, which would have much more varied useful applications. I don’t need a fast 400-megavoxel smoke pipeline *that* badly 😛
Any gamer that buys this GPU for 8K gaming might as well buy a laptop with a GT 610M that has 2GB of DDR3 VRAM.
On a serious note, I wonder what kind of CAD renderings or other applications would call for 32GB of VRAM?
Well….YEAH, but it’s a SIGNED poster. 😉
By ‘look it up’ I think you mean ‘glance upward at the poster hanging above the monitor’ right? 😛
You could get that number off of a single card, in theory, using four-port MST hubs. The catch is that AMD limits the maximum number of logical displays to just six. Of course, the key word is logical displays, as I’ve seen some MST hubs produce a single large virtual surface from multiple displays (i.e. two 1920×1080 displays will appear as one 3840×1080). So conceivably it would be possible to drive 24 displays this way, but total resolution wouldn’t be greater than having six 4K monitors.
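A quick sketch of that arithmetic (the six-logical-display cap and the four-port hubs are the figures from the comment above):

```python
# MST fan-out math from the comment above.
logical_display_limit = 6   # AMD's per-GPU logical display cap
mst_hub_ports = 4           # four-port MST hub behind each logical display

physical_displays = logical_display_limit * mst_hub_ports
print(physical_displays)    # 24 physical panels

# Total pixel throughput is still capped at six 4K surfaces:
uhd_pixels = 3840 * 2160
print(logical_display_limit * uhd_pixels)  # ~49.8 million pixels max
```

So the 24-panel wall is achievable, but each group of hub-attached panels shares one logical display's worth of resolution.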
I’m still skeptical that it’s really [i<]that[/i<] big a deal in most scenarios. Personally I don't imagine needing to replace the GTX 970 I splurged on this year for a [b<]very[/b<] long time, and I really could have gotten by on the GTX 660 I used to have indefinitely.

Now, a card intended for heavy professional rendering and compute scenarios has a different set of intended uses and outcomes. From personal experience I'll tell you that 32 gigs of memory could [b<]all[/b<] be used by seismic processing applications, and you'd still end up wanting more.
Are you suggesting a hybrid might be possible…
A couple of Fury X’s “AND” 32GB of GDDR5?
but not timely 😉
I demand 2^64 miniDP =]
[quote<]4GB of VRAM is enough for gaming currently when your drivers are optimized for VRAM.[/quote<] Sure, but what evidence has AMD presented that their drivers are thusly optimized? P.S. --> a link to that quote from the AMD executive who flat-out said that AMD's drivers are [b<]not properly optimized[/b<] to use RAM efficiently isn't proof of the contrary. That's like saying that a doctor who diagnoses his patient with cancer has already cured the patient just because he spotted the problem.
My bigger issue, personally, is that no one has shown it to be a problem at playable framerates on other cards so far. If I recall correctly.
I am dubious that it will last for the 1.5+ years it will need to, but AMD has engineers working on minimizing the memory used by games for the first time ever.
And unless some site starts measuring VRAM usage while playing through games, it is hard to know how much that pays off. It could be a massive halving of usage in some games, and near zero in others. Wish there were really good tools to cover this. Would be neat.
Also irked that a low VRAM amount wasn’t a problem until now. 680 2GB, whatever. 780 Ti 3GB, whatever. 970 3.5GB, no problem. I would say that the former two were problems and the latter isn’t. No one else seemed to recognize that, or say it. !@#$
Why do you have PR/Marketing in there? AMD doesn’t have those; they just have someone off the street throw numbers into PowerPoint and hope they aren’t too overstated!
Dear chuckula:
4GB of VRAM is enough for gaming currently when your drivers are optimized for VRAM.
(4GB is still on the low side; Fiji cards really did need at least 6GB to be truly future-proof imho. However, we will see how memory optimizations will be able to keep 4GB enough. For AMD’s sake, it had better be at least until 1Q2017.)
FP64 and HPC applications meanwhile, can almost never have enough RAM.
Rest of your comment is basically a bunch of stuff you don’t understand about the market, HBM, tape out costs and similar.
Or, you understand those things and choose to ignore them.
I can write up the details if you really care to read them. Most of it has to do with costs, what market segments need, and some needling about “blind faith,” as many people (no clue if you do!) who say AMD lovers have blind faith have blind faith in Intel/ARM/Nvidia/Apple/etc. themselves.
If anyone has blind faith in AMD for whatever reason, they should read the disclaimer on its most recent announcement, lol! It’s a huge wall of text that pretty much explains the market and why you SHOULD NOT have blind faith in AMD (or Nvidia, or Intel, or ARM, or pretty much any other company).
… horses for courses. This is a server card, and while I’m not familiar with server GPU workloads, it seems that both Nvidia and AMD have high-memory cards. Unless they’re just throwing memory at a PCB and waiting to see if it sticks, I’d guess there’s a market, and I suspect the people saying 4GB isn’t enough to play X game aren’t it.
As has been said, first-generation HBM is supposedly limited to 4GB. They could be working it into their server lineup, but not in this particular model.
Twenty-four? What kinda cut rate operation do you think we’re running here? I won’t rest until I can buy a card with 144 mini-DP outputs! I want to fall asleep in a tangle of adapters, and wake up bathed in so much emitted radiation from the monitors that I turn into the Hulk.
I had to look it up…I swear.
Yes, yes, mental gymnastics to defend first generation HBM’s max size of 4 gigs are tiresome. And fanboys are the devil. That said, the 32 gigs here won’t be wasted in heavy compute scenarios or in handling gigantic rendering workloads.
Honest answer, they’ll wait until HBM2 where you can get 8GB per stack and have 32GB cards.
All these years and I never knew Timberlake was from NSync.
Wake me when there are twenty four mini-DP outputs.
Camp Granada is hopping this time of year.
[url<]https://www.youtube.com/watch?v=9jjiWS__Mp0[/url<]
Reply from AMD:
[quote<]Dear Chuckla, Thank you for your blind following. As for the rest of your letter, we will respond as soon as our PR/Marketing/Accountant/Driver guy gets back from summer camp. Have a great summer! [/quote<]
Well, that’s all fine, but can it run Minecraft at 60 FPS at 4K with NSync on (yeah, the 90s group with Justin Timberlake)? Well, can it?
Dear AMD:
We need a little help here. Are we supposed to viciously attack anyone and everyone who claims that 4 GB of RAM is too little or that 32 GB of RAM is overkill? Or both? It’s still OK to launch personal attacks accusing people of being in a conspiracy for not having faith in HBM when most of AMD’s “new” product line, including the high-end professional parts, doesn’t use HBM at all, right?
Are we OK to just attack anyone and everyone who questions AMD no matter what the product is without the need for any form of logical consistency? We think we can do that!
Can you provide us with some prepared talking points that we can cut-n-paste to demonize the editors of TR and any other website that fails to accuse your own PR department of not being pro-AMD enough?
Blindly worshipful as always,
Your Fanboys
I wonder if we’ll see a W9170 with 32 GB of RAM that’ll add six mini-DP outputs for workstation graphics.
Now [i<]that's[/i<] a spicy meatball.