AMD doubles down on server graphics with FirePro S10000

Remember that dual-GPU monster Mark Papermaster teased at the AMD Fusion Developer Summit in June? Well, it’s now official. Say hello to the FirePro S10000, AMD’s new flagship graphics product aimed at the server market.

The FirePro S10000 packs not one, but two Tahiti GPUs on a single circuit board. On the desktop, Tahiti GPUs power AMD’s Radeon HD 7900 series. Here, the chips sit under a trio of fans perched atop a dual-slot cooler. All told, the card pulls 375W of power at peak, or about 67% more than AMD’s previous flagship in this space, the FirePro S9000. At the same time, the dual-GPU card offers a substantial increase in floating-point performance over its single-GPU sibling. Here are the specs:

                 Shader  Core clock  Peak SP  Peak DP  Memory capacity  Memory interface  Memory      TDP
                 ALUs    (MHz)       tflops   tflops   (GDDR5)          width             bandwidth
FirePro S9000    1792    900         3.23     0.806    6GB              384-bit           264 GB/s    225W
FirePro S10000   3584    825         5.91     1.48     6GB              384-bit           480 GB/s    375W
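
Those peak numbers follow straight from the shader count and clock. Here is a minimal back-of-the-envelope sketch in Python, assuming one fused multiply-add (two flops) per ALU per clock, Tahiti's 1/4-rate double precision, and effective GDDR5 data rates of 5.5 GT/s (S9000) and 5.0 GT/s per GPU (S10000); the memory data rates are inferred from the bandwidth figures rather than quoted by AMD.

# Back-of-the-envelope peak throughput, assuming 1 FMA (2 flops) per ALU per clock
# and Tahiti's 1/4-rate double precision; GDDR5 data rates are assumptions.
def peak_tflops(alus, clock_mhz, dp_ratio=0.25):
    sp = alus * clock_mhz * 2 / 1e6      # ALUs x MHz x 2 flops -> tflops
    return sp, sp * dp_ratio

def mem_bandwidth_gbs(bus_bits, gtps, gpus=1):
    return bus_bits / 8 * gtps * gpus    # bus width in bytes x effective transfer rate

for name, alus, mhz in [("FirePro S9000", 1792, 900), ("FirePro S10000", 3584, 825)]:
    sp, dp = peak_tflops(alus, mhz)
    print(f"{name}: {sp:.2f} SP tflops, {dp:.2f} DP tflops")

print(mem_bandwidth_gbs(384, 5.5))            # ~264 GB/s (S9000)
print(mem_bandwidth_gbs(384, 5.0, gpus=2))    # ~480 GB/s (S10000, both GPUs combined)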

AMD claims the FirePro S10000 offers the highest double-precision floating-point throughput per watt of “any board currently on the market”—and, thanks to VMware and Citrix virtualization software, two users can purportedly share a single board.

The chipmaker’s marketing documents compare the S10000 to Nvidia’s Tesla K10, a similar dual-GPU design featuring a pair of GK104 chips (the same ones that power the GeForce GTX 680). The Tesla K10 has lower peak floating-point performance: 4.58 and 0.19 teraflops, respectively, for single- and double-precision computations. Its memory bandwidth is only 320GB/s. Also, the Tesla is targeted solely at general-purpose computation tasks, while the FirePro S10000 has a healthy assortment of display connections, including one DVI port and four Mini DisplayPort outputs. That said, the Nvidia card’s 225W power envelope is quite a bit tighter than the new FirePro’s 375W TDP.
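
AMD's performance-per-watt claim is easy to sanity-check from the board-level figures above. Here is a rough sketch using only the peak DP numbers and TDPs quoted in this article, so it says nothing about sustained, real-world throughput:

# Peak DP flops per watt, from the figures quoted in this article.
boards = {
    "FirePro S10000": (1.48, 375),   # peak DP tflops, TDP in watts
    "FirePro S9000":  (0.806, 225),
    "Tesla K10":      (0.19, 225),
}
for name, (dp, tdp) in boards.items():
    print(f"{name}: {dp * 1000 / tdp:.2f} DP Gflops/W")
# ~3.95 for the S10000, ~3.58 for the S9000, ~0.84 for the K10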

According to AMD, the FirePro S10000 is already available and carries a suggested retail price of $3,599. The single-GPU FirePro S9000 was priced at $2,499 when it came out in August, and the Tesla K10 currently sells for $3,275.99 at Amazon.

Comments closed
    • Mr Bill
    • 7 years ago

    Perhaps this would have been a good upgrade for the Titan supercomputer; but too late…
    [url<]http://www.anandtech.com/show/6421/inside-the-titan-supercomputer-299k-amd-x86-cores-and-186k-nvidia-gpu-cores[/url<]

      • sschaem
      • 7 years ago

      Too late. Cray just announced that it will not develop any more AMD-based systems and is moving exclusively to Intel for the CPU, and the GPU will continue to be Nvidia (possibly Intel??).

      AMD is out of the HPC market for good.

        • Mr Bill
        • 7 years ago

        As explained in the article, this was a CPU-swap upgrade situation, and thus the continued use of AMD. I understand the Nvidia commitment is because they have the rendering software support, as mentioned in AnandTech’s article.

          • sschaem
          • 7 years ago

          The article clearly states that the system was NOT using AMD GPUs before; it only had a bunch of Nvidia nodes. Don’t believe me? Read what you linked.

          If Cray had wanted to use AMD GPUs it would have; AMD would have been pleased to deliver SXM cards.

          Reality: Cray is not interested in AMD GPGPU. Cray discontinued all AMD-related R&D; they are all Intel now for their CPU architecture.

            • Mr Bill
            • 7 years ago

            Nowhere did I state that the system was previously using an AMD GPU. I was referring to the use of AMD CPUs both prior to and for the upgrade. Try not to see this as an adversarial post. The excerpt below explains the “but too late” of my first post.

            “… AMD CPUs and NVIDIA GPUs

            If you’re curious about why Titan uses Opterons, the explanation is actually pretty simple. Titan is a large installation of Cray XK7 cabinets, so CPU support is actually defined by Cray. Back in 2005 when Jaguar made its debut, AMD’s Opterons were superior to the Intel Xeon alternative. The evolution of Cray’s XT/XK lines simply stemmed from that point, with Opteron being the supported CPU of choice.

            The GPU decision was just as simple. NVIDIA has been focusing on non-gaming compute applications for its GPUs for years now. The decision to partner with NVIDIA on the Titan project was made around 3 years ago. At the time, AMD didn’t have a competitive GPU compute roadmap. If you remember back to our first Fermi architecture article from back in 2009, I wrote the following:

            “By adding support for ECC, enabling C++ and easier Visual Studio integration, NVIDIA believes that Fermi will open its Tesla business up to a group of clients that would previously not so much as speak to NVIDIA. ECC is the killer feature there.”

            At the time I didn’t know it, but ORNL was one of those clients. With almost 19,000 GPUs, errors are bound to happen. Having ECC support was a must have for GPU enabled Jaguar and Titan compute nodes. The ORNL folks tell me that CUDA was also a big selling point for NVIDIA.

            Finally, some of the new features specific to K20/GK110 (e.g. Hyper Q and GPU Direct) made Kepler the right point to go all-in with GPU compute…”

    • chuckula
    • 7 years ago

    AMD has an interesting proposition with these cards. If your project isn’t married to CUDA and is something that would do well with GPGPU, then either one of these cards would be fine in a workstation-class machine (yes, even the 375-watt one; that’s why we have big PSUs, and the power draw isn’t that scary on an individual-machine basis). On a huge Top-500 compute cluster, the 375-watt figure becomes a bit more of an issue, but the S9000 cards aren’t too shabby, since I’m not letting the peak Tflops number dictate my analysis.

    Unfortunately for AMD, Nvidia’s relationships in the large-scale HPC world give it a leg up in getting design wins in the Top 500. The good news is that GCN is really AMD’s first architecture that is strongly compute-focused, so they can only go up from here. Also, OpenCL is gradually getting more support (see next-generation Folding@Home as one example).

      • beck2448
      • 7 years ago

      Nvidia dominates the pro market at 90% because they provide total software and hardware solutions and programmers, as they did for AVATAR, one of the most visually complex films in history.
      The benefits achieved with the solution co-developed by NVIDIA Research’s Pantaleoni compelled the CGI lab to further embrace GPU computing – the power of NVIDIA technology to perform massively parallel computing. NVIDIA ported Weta’s PantaRay engine to a CUDA-based, GPU-driven version that runs 25 times faster, utilizing an NVIDIA Tesla® S1070 GPU-based server instead of a CPU-based server. This allowed rendering that had taken a week or more to be done in a few hours.

        • ztrand
        • 7 years ago

        Remember that article a while ago about marketing shills on TR needing to clearly state who their owners were? Yeah, that was about you.

    • albundy
    • 7 years ago

    that’s a lot of zeros!

    • OU812
    • 7 years ago

    The S10000 uses 60% more power (375 watts) than the K20X (235 watts) yet only performs 13% faster (1.48 Tflops peak DP vs. 1.31 Tflops peak DP).

    With the S10000 being a dual-GPU card and the K20X being a single-GPU card, that 13% may disappear (and even turn negative) in real-world performance, since data between the two GPUs needs to travel over an external bus vs. staying internal on the K20X.
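
    (Both ratios follow directly from the peak figures quoted here; a minimal arithmetic sketch using only those numbers:)

    # Ratios from the peak spec-sheet figures quoted in this comment.
    s10000_tdp_w, k20x_tdp_w = 375, 235        # board TDPs in watts
    s10000_dp, k20x_dp = 1.48, 1.31            # peak DP Tflops
    print((s10000_tdp_w / k20x_tdp_w - 1) * 100)   # ~60% more power
    print((s10000_dp / k20x_dp - 1) * 100)         # ~13% more peak DP throughput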

      • ronch
      • 7 years ago

      Oak Ridge seems to have made a bad choice by choosing Bulldozer-based Opterons (1st gen, mind you…. the deal was inked years ago), but they clearly did well in choosing Nvidia for 90% of their computational resources. Imagine ORNL pairing S10000’s with those hot Opteron 6000 series chips!

        • OU812
        • 7 years ago

        Hotter and Slower seems to be the AMD of today as compared to both Nvidia and Intel.

          • BestJinjo
          • 7 years ago

          HD7970 GHz is the fastest single GPU for games in reference form, the Sapphire HD7970 Toxic 6GB is the fastest single GPU this generation out of the box, and the Matrix HD7970 OC would be the fastest single GPU maxed out overclocked on air against any GTX680. Your statement is not exactly accurate.

          [url<]http://www.techpowerup.com/reviews/AMD/Catalyst_12.11_Performance/23.html[/url<]

          From a performance/watt point of view, NV has superior tech, but AMD's top single-GPU card is actually faster (HD7970 GHz > 680). With K20/K20X, NV will most likely retake the performance crown next gen. As far as power consumption goes, there is not much to it.

          "The XFX Double D HD 7970 GHz Edition seemed to draw less power across all games than the NVIDIA GeForce GTX 680"
          [url<]http://www.hardocp.com/article/2012/10/30/xfx_double_d_hd_7970_ghz_edition_video_card_review/11[/url<]

            • OU812
            • 7 years ago

            You cannot use toy gaming cards for reference when the subject is servers.

            These facts are real and readily available.

            FACT: AMD Server CPUs use more power than Intel XEON CPUs and perform slower.

            FACT: AMD professional GPUs use a lot more power (375 vs. 235 watts) and perform maybe only 13% faster.

            See Hotter and Slower.

            • BestJinjo
            • 7 years ago

            I am not comparing compute vs. gaming cards. Your statement said:

            “Hotter and Slower seems to be the AMD of today as compared to both Nvidia and Intel.”

            You didn’t specify that you were talking about compute cards only. You also continue to ignore the flexibility of having both class leading single and double precision in S10000. If you want that from NV, you have to buy a K10 and a K20 side-by-side…

            • Haserath
            • 7 years ago

            Don’t bother trying to compare gaming cards versus compute. Nvidia is winning hands down in compute.

            • BestJinjo
            • 7 years ago

            They are if you are talking about professional compute. However, that’s mostly because of the great relationships NV has fostered with its customers, its excellent support after purchase, and the prevalence of CUDA apps. Theoretically speaking, the S10000 offers class-leading single and double precision performance, beating both the K10 and K20X.

            As far as consumer compute, AMD is actually winning hands down – bitcoin mining, distributed computing projects (almost all modern projects run faster on AMD GPUs), DirectCompute graphics effects in games (Sleeping Dogs, Dirt Showdown, Sniper Elite V2, Hitman Absolution). Again, blanket statements such as “NV is winning in compute” are simply not true, since I can’t go out and buy a $500 GPU that’s still fast in games and gives me cheap compute. Also, some people use OpenCL acceleration, and once again the HD7000 series gives you that for cheap.

            For professional uses, of course NV is winning, given CUDA’s wide software support and the fact that NV created this market while AMD is only now taking it seriously. For us regular people who don’t spend $3,000-5,000 on a GPU, NV went backwards from the GTX580, and the HD7970 demolishes the GTX680 in consumer compute/OpenCL/hashing/distributed-computing power.

    • OU812
    • 7 years ago

    First off, what server can handle a 375-watt card?

    Second, aren’t server cards passively cooled?
    Is the S10000 even available passively cooled?

    Nvidia just released their K20 (225 watts) and K20X (235 watts).
    We know the K20X is available passively cooled.

      • Meadows
      • 7 years ago

      You do realise that servers aren’t run on potato batteries, right?

        • UberGerbil
        • 7 years ago

        If ARM servers take off somebody will try.

        • OU812
        • 7 years ago

        Servers are densely packed and have a limit on heat and power.

        The S10000 seems to have problems with both.

        So again I ask: what server can handle this 375-watt card?

          • Meadows
          • 7 years ago

          Not all servers are so dense.

            • OU812
            • 7 years ago

            So why are you not providing any links to said servers that can accept this 375-watt PCIe card?

            • Meadows
            • 7 years ago

            You started throwing claims around first, why try putting the burden of proof on me?

            Show me why every server has to be a blade server. I’m waiting.

            • Stranger
            • 7 years ago

            [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16816152125[/url<]

            A 30-second search on Newegg. I have a feeling anyone who is interested in GPGPU computing is probably going to be using a specialized chassis to deal with the power use of the cards, a la
            [url<]http://www.cray.com/Products/XK/XK7.aspx[/url<]

      • ish718
      • 7 years ago

      Maybe you missed this part
      [quote<]Also, the Tesla is targeted solely at general-purpose computation tasks, while the FirePro S10000 has a healthy assortment of display connections, including one DVI port and four Mini DisplayPort outputs.[/quote<]

      • Airmantharp
      • 7 years ago

      This doesn’t even really look like a server part-

      See the fans on the side? Not blowers? You’re not putting more than two of these in a box. See that TDP? You’re not going to WANT more than two in a box.

      This is a workstation product, and not a very good one. AMD is trying to compete by releasing quantity over quality.

    • R2P2
    • 7 years ago

    Can we have a moratorium on “double down” when not referring to the KFC “sandwich”? Holy crap that’s overused lately. I’m not even sure if people are using it correctly.

      • BIF
      • 7 years ago

      KFC Sandwich? Even that is overuse of the word.

      “Double down” should only refer to backgammon and card games.

      • MadManOriginal
      • 7 years ago

      You’ve just virtually guaranteed that it will show up more, not less. Good job.

        • dpaus
        • 7 years ago

        “Purportedly”, anyway…

    • Chrispy_
    • 7 years ago

    Is the OpenCL compute power of the desktop Tahiti compromised by intentional driver sabotage?

    We have OpenCL renderers running on desktop 7970’s here – VRay for 3DStudio seems to be pretty quick on them and that’s a lot of our compute requirement fulfilled right there. They’re certainly more cost-effective than throwing jobs at the 18-node i7 renderfarm.

    This S10000 card is the equivalent of a “certified” 7950 crossfire setup, and I’m not aware of any performance gain from it being “certified”.

    edit – I see sschaem has already asked this and got sent down in flames, but nobody has actually answered the question yet.

      • ET3D
      • 7 years ago

      I think that “intentional driver sabotage” is taking it too far, but yes, traditionally professional cards have been accompanied by drivers which are certified for certain applications and provide higher performance for them. The negative view is wrong, though. Nobody is deliberately making the card slower; it’s just that consumer drivers are optimised for games, not professional applications. (And they still work pretty well for these applications.)

        • Chrispy_
        • 7 years ago

        I wasn’t trying to be negative – the GeForces and Quadros are on identical silicon, so they cripple the Quadro-specific features on a GeForce via driver & firmware for product segmentation.

        Quadro cards run certain code (largely to do with wireframe aliasing and z-depth accuracy) at a much faster rate than GeForces. I’ve traditionally still bought the GeForces since the performance delta in our software averages about 3x but the cost delta has been closer to 8x.

      • dpaus
      • 7 years ago

      [quote<]I'm not aware of any performance gain from it being "certified"[/quote<]

      I'm disappointed: anyone with a renderfarm should know that the huge price differential for a 'certified' component buys you a quantum leap in [i<]support[/i<], not performance.

        • Chrispy_
        • 7 years ago

        I beg to differ; Quadro/FireGL support seems no better than GeForce support. Nvidia have passed the buck to both Autodesk and Bentley Systems in the past. My FirePro issues lay with McNeel, and the solution magically appeared fixed in the next Rhino update, not the next AMD driver update.

          • dpaus
          • 7 years ago

          I bow to your apparently greater experience with Nvidia support, but I know that the difference we get in support for FirePro products vs Radeon products certainly explains a lot to me about the difference in the price points.

        • moose17145
        • 7 years ago

        I’m disappointed in your ability to read. The way it sounds, he uses a render farm at work, meaning his place of employment owns said render farm, not him. And just because he uses the render farm does not mean that he has a firm grasp of the subtle differences between consumer-level and professional-grade video cards.

        I don’t think I have ever seen TechReport do a review of a professional-grade GPU, let alone explain what exactly that price premium is paying for compared to a consumer-grade part. And just because you have to use a render farm at work doesn’t mean you have any understanding of the hardware that is doing the labor of rendering. From what I gather, it’s his job to know how to use the rendering software, not how all the hardware for it works. But he is here, apparently trying to learn and asking a legitimate question. How about we help him instead of being smartasses?

        Sorry, but I’m getting tired of this elitist BS attitude that seems to have developed in the front-page article comments. Anymore it seems like anyone who asks a question because they don’t understand gets bashed and down-thumbed like crazy. Why is beyond me.

        And in reply to what the difference is between certified parts and consumer-level garbage: a certified part has drivers that are more optimized for professional software and less for games. In fact, a 7970 will likely perform better/faster in games than the S9000, even though they are both made from the same silicon. The S9000, though, will perform much, much better in professional software like CAD and the like. In some cases the consumer-level card may not even work at all for the professional-level software, while the certified part will work perfectly. Likewise, I have seen some seriously high-end certified parts totally fail at being able to play certain videogames while a mid-range or even low-end consumer part handles it no problem. It’s all about how they optimized the drivers.

        As dpaus mentioned as well, a large part of the cost is also for the support you receive with these certified cards. In many cases, if a firm is having an issue with one of these cards and it can be tracked down to the drivers, or at least fixed in the drivers, AMD/Nvidia will often write a custom driver patch for that one company to fix their issue. That is obviously expensive to do. But so are these cards, and that is why you paid the premium for a certified card instead of a much cheaper gaming card.

          • dpaus
          • 7 years ago

          [quote<]I’m getting tired of this elitist BS attitude that seems to have developed in the front-page article comments[/quote<]

          Was that comment intended for me, or for the group above who were calling each other ‘childish’, ‘high-school level of understanding’, ‘going too fast for you’, ‘in need of hand-holding’, ‘not very bright’, ‘no idea what you’re talking about’, etc.? ’Cause I thought ‘disappointed’ was pretty mild....

            • moose17145
            • 7 years ago

            Think it was more of a general rant about the issue as of late. You were just the unfortunate one to receive it. I had read your reply after I had just gotten done reading a bunch of other responses that were rather… snooty… I guess you could say…

            On another thread someone asked a legitimate question because they didn’t understand something, and got down-voted for it. It was more than just a single down vote, too. I called it out, saying “wtf is with the down votes for asking a legitimate question,” and I got down-voted, and someone replied “oh you must be new here, welcome to TR.” I have been here at TR since 2003 and it never used to be this bad. People would ask even simple questions because they were new to the whole system-building thing, and people would reply and generally seem to help one another out. In fact, back then I would actually e-mail the authors of the articles directly, and they would get back to me within a day or two with answers to any questions I had. And back then I was brand new to this whole computer-building thing and still in high school… and had some (looking back at it) rather basic questions that would indicate a general lack of understanding of many of the key technologies inside a desktop PC. But here on TR there was never a shortage of people being more than willing to help me understand. This site is in large part a reason why I was able to get into building my own systems and took it up as a hobby, because the community seemed to be more than willing to help those new to it and to help one another out. Anymore it just feels like “oh, you don’t already know this? Well, piss off then for not being as good as us.” Idk what changed… but either way, the community here at TR seems to have gotten much less receptive to people asking questions because they don’t understand something. Especially on the front-page articles.

            Edited to fix a sentence that I had completely failed at writing out. Never a good thing trying to type down one thing while thinking about something completely different. The two trains of thought just get jumbled into something incomprehensible. 🙂

            • dpaus
            • 7 years ago

            I totally agree – I’ve learned much from the generous souls on this site who have freely given their knowledge, their experience and the time it takes to try to impart them to noobies.

            There are some on here who simply downvote anything and everything from certain posters. It must suck to be them.

            • Chrispy_
            • 7 years ago

            I generally wouldn’t get too worked up about the whole post-voting thing;

            I mean, look, my post above is on -2 and it is nothing more than a statement of fact – no opinion to like/dislike/agree/disagree with, just a plain statement of what happened and how it was resolved.

            For those that will downrank [i<]this post[/i<], please don't; my personal well-being and emotional stability as an anonymous poster is directly linked to being 'liked' by the button-clicks of other anonymous posters. Negative votes turn me into a kitten-murdering psychopath. For the love of god, [u<]THINK OF THE KITTEHS.[/u<]

          • UberGerbil
          • 7 years ago

          There are other issues with the pro cards and certified drivers that go beyond mere performance and support. I don’t know if it’s still true, but I know that years ago in AutoCAD there were situations where you could actually see the difference in renderings, where the increased precision (at the expense of speed) meant the pixels were showing up in precisely the right spot. Being off by a tiny amount won’t even get noticed in one frame in sixty during a firefight, but it can cost millions in a model that is being studied intently by professionals.

          More importantly (assuming those kinds of glitches are no longer an issue), there’s liability. Just about any professional contract will have some boilerplate language about “standards and practices” that requires conformance to what everybody in the profession generally does. In the event of some failure that leads to litigation, you don’t want to be in a position where you’re revealed to be using consumer equipment and drivers when the norm in your industry is to use pro-level products. And that’s true [i<]even if that failure had nothing whatsoever to do[/i<] with the computer modelling or IT systems in general, and even if the use of consumer-level products would have made no difference.

    • fellix
    • 7 years ago

    Considering that AMD is the only player in town providing the most up-to-date OpenCL support (both feature- and performance-wise), this dual FireGL monster should actually appeal quite well to any professional dealing with GPU-accelerated off-line rendering. Sure, there’s the CUDA-based Octane renderer, but the selection for OpenCL is broader, with some free offerings.

    • Arclight
    • 7 years ago

    Papermaster is such a tease…..

    • EV42TMAN
    • 7 years ago

    I love how the post implies that someone buying this video card would use it for anything other than GPU computing. That being said, to answer someone else’s comment about heat: if used in a real data center environment, there will only be one or two of these cards in a 1U or 2U rack-mount server, so there will be plenty of cooling. Also, they probably did the same thing Nvidia did with Tesla cards: if you’re installing Tesla cards into servers that use passive cooling (1U or 2U rack-mount servers), you can remove the fan part of the heat sink for optimal cooling.

    • sschaem
    • 7 years ago

    Anyone done some OpenCL benchmarks comparing a 7970 to an S9000?

    Seems to me a 7970 would be slightly faster and cost $380.
    The S10000 should be, what, 80% faster and cost $3600?

    10x the price for not even 2x the performance; is this destined for some US government agencies?

    edit: well, if AMD can sell a dozen cards they will double their revenue this quarter.

      • Waco
      • 7 years ago

      I am AMAZED that AMD released this for “server” use with the heat sink designed like that. Stack a few of them up in a serious production server (with no gap between the cards) and they’ll burn up in no time.

      Granted, they probably aren’t certified for that, but it seems like they’d have a lot better luck creating a card with a blower on one end (that pulls from the edge of the card) than this monster that can’t be installed next to any other PCIe cards due to its cooler design.

      EDIT: For workstation use, sure, it’s fine. I just don’t see it going in many servers like that.

        • destroy.all.monsters
        • 7 years ago

        Exactly what I was thinking.

        • cynan
        • 7 years ago

        [quote<]Stack a few of them up in a serious production server (with no gap between the cards) and they'll burn up in no time.[/quote<]

        Well, you didn't think they called it "FirePro" for nothing, did you?

        • nafhan
        • 7 years ago

        The three-fan thing pictured in this article is absolutely a workstation card – not a server card. I would never dream of putting this in a typical rackmount server. If they build one of these for server usage, it will be “passively” cooled. Passive is in quotes because it would be taking advantage of the massive airflow from the case fans. With servers, you really don’t want to see add-on cards with their own fans. That’s both asking for failures (case fans have built-in redundancy) and likely to disturb the optimal airflow path while adding heat to the case. There are pictures of Nvidia’s K20X cards on Anandtech right now. I would expect a “server” version of this card to look VERY similar to that.

          • Waco
          • 7 years ago

          Well then, TR posted the wrong picture to go with their article. 😛

          I expected a bigass heatsink but I’m assuming AMD marketing thought the workstation version looked better for the press release.

      • chµck
      • 7 years ago

      How old are you? That’s -maybe- a high-school-level understanding of business.

        • sschaem
        • 7 years ago

        My post is to show how this card targets a limited subset of the server market.

        Not very bright, are you?

        Hence: AMD won’t make much money even at those insane prices (unlike Nvidia).
        It’s also a stab at AMD for having its revenue shrink at alarming rates…

        It’s also based on cost/performance: for OpenCL-type apps, this card makes no sense
        (not at $3600).

        Do you need any more hand-holding?

          • Jason181
          • 7 years ago

          Let’s see… K10 @ 0.19 Tflops DP versus S10000 at 1.48 Tflops DP. EIGHT TIMES the performance for 10% more. Yeah, this is totally gonna flop (pun intended).

          You really have no idea what you’re talking about. The market is there for these, and they will do well.

            • sschaem
            • 7 years ago

            I don’t want to be rude, but I didn’t start the belittling in this thread.
            So I guess I’m going too fast for some of you 🙁

            7970 GHz edition: 1.03 Tflops DP, $380
            [url<]http://www.amd.com/us/products/desktop/graphics/7000/7970/Pages/radeon-7970.aspx#3[/url<]

            S10000: 1.48 Tflops DP, $3600

            For compute, the S10000 brings little performance improvement for 10x the price. The S10000 is a niche product for some limited virtualization market only.
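
            (A quick way to see the gap is dollars per peak DP Tflop, using the figures quoted in this sub-thread plus the article's $3,599 list price; a minimal sketch:)

            # Price per peak DP Tflop, from the numbers cited in this thread.
            hd7970_dp, hd7970_price = 1.03, 380      # Radeon HD 7970 GHz, as quoted above
            s10000_dp, s10000_price = 1.48, 3599     # FirePro S10000, the article's MSRP
            print(hd7970_price / hd7970_dp)          # ~369 dollars per peak DP Tflop
            print(s10000_price / s10000_dp)          # ~2,432 dollars per peak DP Tflop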

            • flip-mode
            • 7 years ago

            Indeed, you did not start the belittling. Thumbs up for you! Don’t get sucked into it. It’s disgraceful, in every sense of the word.

            • khands
            • 7 years ago

            The whole market for these cards is based around application-specific certified drivers, which you simply won’t get on a consumer card.

            • flip-mode
            • 7 years ago

            This. My Radeon 5800 is useless in some of my programs. A $100 FirePro 3800 with 1/5th the spec works great.

            • Jason181
            • 7 years ago

            Yes, comparing the two shows that you don’t know what you’re talking about. They are different in many ways, including duty cycle. If you want a card to do bitcoin mining the 7970 is fine, but not for these applications. I’m not belittling you by telling you that you really don’t know what you’re talking about. It’s a fact backed up by your comments.

            • sschaem
            • 7 years ago

            I never mentioned any Nvidia products, so I have no idea why you keep bringing that up as an argument.

            And if you want to bring up reliability: AMD doesn’t offer ECC, Nvidia does. Case closed on that.
            Look at the HPC market… any AMD GPUs being used? Nowhere.

            My point remains: for OpenCL work this card seems to be 10x overpriced.

            And please, post the duty cycle of the S10000 and a name-brand 7970.

            And I’m not sure what makes you think that this card will sell well, since its market is so limited.

            • OU812
            • 7 years ago

            K20X at 1.31 Tflops peak DP at 235 watts vs. S10000 at 1.48 Tflops peak DP at 375 watts.

            The S10000 uses 60% more power to gain only 13% peak performance. And that 13% will evaporate in real-world loads, because a two-GPU card is less efficient than a single-GPU card, as data between the two GPUs has to travel over an external bus.

            The K20X is passively cooled, whereas the S10000 is cooled by three probably very loud fans.

            • BestJinjo
            • 7 years ago

            What if someone needs a very fast card for single-precision performance? You seem to have selectively focused on double precision. For DP, the K20/K20X makes way more sense.

            Now single precision:

            S10000 = 5.91 Tflops
            K10 = 4.58 Tflops
            K20X = 3.91 Tflops

            Different professional cards are created for different purposes.

            • OU812
            • 7 years ago

            > What if someone needs a very fast card for single-precision performance?

            That’s what the K10 is for.

            > You seem to have selectively focused on double precision.

            Well, my reply was to Jason181, who was comparing double precision on the K10 to the S10000. If double precision was the criterion, then the K20 should have been used instead of the K10.

            • BestJinjo
            • 7 years ago

            “Thats what the K10 is for.”

            But S10000 has superior double and single precision to K10. So in essence, it does 2 things at once better than either K10 or K20. K10’s single is only 4.5 Tflops. If you get S10000, you get faster single and double precision. Did this ever occur to you? You pick and choose what you want to compare, which makes you sound biased.

            With S10000, you don’t need to compromise:
            SP = 5.91 Tflops (class leading)
            DP = 1.48 Tflops (class leading)

            As I said before, we shouldn’t only be comparing theoretical rates, but your argument is strange indeed when you continue to post power consumption but ignore that with S10000, you have the flexibility to have the best of both worlds in terms of theoretical performance.

            • Jason181
            • 7 years ago

            I was commenting on an article that had no mention of the K20. What article are you commenting on?

            • abw
            • 7 years ago

            The K20X is passively cooled, did you say?… Buy yourself a brain…

            and read TR’s article on the subject again.

            Looking at its heatspreader, its thermal resistance is hardly lower than 2°C/W,
            so at full power it would reach 225 x 2 = 450°C over ambient temperature….
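
            (The reductio here is just delta-T = power x thermal resistance; a minimal sketch with the commenter's assumed 2°C/W figure. Note the K20X's listed TDP is 235 W, not 225 W.)

            # Delta-T = power x thermal resistance, with the numbers assumed above.
            power_w = 225          # as stated above; the K20X's listed TDP is actually 235 W
            theta_c_per_w = 2.0    # assumed sink-to-ambient resistance with no chassis airflow
            print(power_w * theta_c_per_w)   # 450 degrees C above ambient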

      • cynan
      • 7 years ago

      deleted

      • jihadjoe
      • 7 years ago

      But it’s OVER 9000!
      That alone justifies the price!

        • dpaus
        • 7 years ago

        I’ll pay $3,600 when they have one that goes to 11.

    • destroy.all.monsters
    • 7 years ago

    Clearly not enough fans.

      • OU812
      • 7 years ago

      And how LOUD is it?

      At 375 watts those fans will have to be spinning very fast.

    • BoBzeBuilder
    • 7 years ago

    MOTHER OF GOD.

    edit: Damn, I read $10000.

      • BIF
      • 7 years ago

      ROFL, me too. I almost spit up my coffee.

    • ronch
    • 7 years ago

    They should put Ruby on these things. Dress her in a lab coat or office attire or something.

    Edit – It’s fairly obvious that those who are replying to this post are men! LOL

      • MadManOriginal
      • 7 years ago

      Given the state of female ‘office attire’ these days they might as well use the same old Ruby pictures that have been around for years.

        • chµck
        • 7 years ago

        I like tight knee-length skirts :).

        • ronch
        • 7 years ago

        Yeah, but that wouldn’t justify the high price tag, would it? If you want to see Ruby wearing a lab coat inside your chassis, you’d better pony up some serious cash, I say.

          • BIF
          • 7 years ago

          $3,600 is pretty serious already, I say.

            • ronch
            • 7 years ago

            Yeah. What I meant to say was, if you’re gonna pony up $3,600, you might as well see Ruby wearing different clothes than her usual getup. Pony up an additional $1,000 and AMD just might decide to give you an S10000 with Ruby wearing nothing. LOL

            Sorry, couldn’t help it. Men will always be men. 🙂

            • willmore
            • 7 years ago

            I think you mean ‘boys will be boys’.

            • ronch
            • 7 years ago

            I’m assuming we’re men. Only SSK calls us ‘boys’.

      • dpaus
      • 7 years ago

      At that price, it better be a skin-tight leather lab coat over thigh-high boots.

      • dpaus
      • 7 years ago

      [quote<]It's fairly obvious that those who are replying to this post are men![/quote<] Yeah, where [i<]is[/i<] SSK today?

    • Hirokuzu
    • 7 years ago

    Obligatory comment about this not being compared to the K20

      • Alexko
      • 7 years ago

      It’s not available yet. I think its specifications aren’t even public, which makes a comparison impossible.

      Edit: well, now it is.

        • OU812
        • 7 years ago

        Why leave the “It’s not available yet” comment when you did the edit? You stated it was available, so why leave the opening comment in at all?

        Or is this like the “Samsung did not copy us” statement that Apple UK had to publish on their front web page. You know, the one they buried at the bottom and then hid with a very small font.

          • Alexko
          • 7 years ago

          Because it was true when I first posted it, and therefore a valid response to Hirokuzu’s comment.

      • Airmantharp
      • 7 years ago

      This is a professional graphics part, not a GPGPU part (does AMD make GPGPU parts?). This competes with a Quadro.
