Opterons fading out of the workstation market

The folks at Jon Peddie Research have published an interesting blog post that explores the disappearance of AMD CPUs from the workstation market. According to the research firm, AMD saw the most success in the dual-processor segment of the workstation market, where Opterons peaked with a slice of the pie just under 10%. That was back in 2006, and things have changed quite a bit since. In the last quarter, JPR estimated the Opteron’s share of the workstation market at a measly 0.1%.

Obviously, AMD has struggled to compete with the Nehalem microarchitecture fueling Intel’s latest workstation CPUs. The going doesn’t get much easier on the graphics front, where AMD’s FirePro graphics cards must square off against Quadros backed by the CUDA infrastructure Nvidia has built up over the years. JPR pegs Nvidia’s share of the workstation graphics market at a whopping 87%, leaving little for AMD.

The post’s author suspects that AMD’s workstation business has become less of a priority than desktops and servers, which have much higher volumes. He also contends that AMD could be holding back while it prepares a new push based on upcoming Llano designs. Bulldozer’s probably a better bet if AMD has a workstation ace up its sleeve, though.

As someone who has run dually Opteron and Athlon MP rigs over the years, I can’t help but be a little sad to see AMD’s share of the workstation space dwindle. But what constitutes a workstation has changed, too. With six-core Phenoms starting at $200 and motherboards offering gobs of second-gen PCI Express connectivity alongside support for copious amounts of memory, one can build the functional equivalent of a workstation using desktop parts that cost much less. You might not find Opterons selling in workstations from big-name system builders, but I suspect that many of AMD’s latest Phenoms have popped up in desktops designed to perform the same tasks.

Comments closed
    • ronch
    • 9 years ago

    I saw this YouTube video years ago, prior to the Barcelona launch. I guess AMD kept blowing their own horn, saying Intel didn’t have their Direct Connect Architecture (read: IMC and HT), monolithic die, and L3 cache, and even going so far as to say Barcelona would beat Clovertown by 40%.

    http://www.youtube.com/watch?v=G_n3wvsfq4Y

    Back then I was kinda disheartened to read that AMD did only so much to improve on their K8 design (let’s see: Side Band Stack Optimizer, widened FPU, improved branch prediction, widened fetch, OoO Load/Store units, and perhaps a few more I can’t remember anymore) and instead, during the design stage of Barcelona, relied on their IMC and HT to keep the K10 afloat. Like Intel’s Pat Gelsinger once said, you can integrate a memory controller only once, and then after that, what? AMD only made a few improvements to the K8 (while they said they made literally 100,000+ decisions to redesign the core, it simply doesn’t show). They can spend the next ten years mulling over what else there is to improve and tweak on the K8/K9/K10, but in the end, you can only re-invent the wheel so many times until it just doesn’t yield better performance anymore. On the bright side, although the K10.5 may not match the Nehalems of this world, it’s nice to see that the K10.5 is most likely the best the original K7 architecture can ever be. It’s tweaked to the very last transistor. In some cases it’s interesting to see that the Phenom II can even perform in the same ballpark as the Core i7. Pretty cool considering the Phenom II is seen as a far less powerful processor. Let’s hope Bulldozer will not be a repeat of Barcelona or even Llano.

    • craigb
    • 9 years ago

    My current workstation is:
    4 x 6-core AMD Opteron
    32GB ECC RAM (this removes any i7 solution from my list, but not Phenom)
    8 x 256GB SSD
    ATI FirePro V8800 (or two)
    This machine is an amazing, rock-solid workhorse for military or film-industry workstation applications.

    Applications are finally catching up; you can now run something consumer-based like CS5 on a machine like this and get great scaling.

    AMD is in no way dead in the workstation space. Their engineers are second to none and can help tweak/optimise applications to deliver speedups of hundreds of percent.

    Now that Intel has caught up and gone NUMA, AMD gets a free boost from properly written software. So Intel throwing cash at applications to make their hardware faster inevitably helps AMD more these days.
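    A concrete illustration of what “properly written” means here: NUMA-aware software allocates each worker’s memory on the node that worker runs on. A minimal sketch using Linux’s libnuma; the buffer size and the work itself are placeholders:

```c
/* NUMA-aware allocation sketch (Linux; link with -lnuma).
 * Keeping a worker's buffer on the memory node it runs on avoids
 * the remote-node hops that hurt dual-socket Opteron/Xeon boxes. */
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    int node = numa_node_of_cpu(sched_getcpu()); /* node we're running on */
    size_t bytes = 64UL << 20;                   /* 64MB scratch buffer */

    /* Allocate on the local node rather than wherever first-touch lands. */
    double *buf = numa_alloc_onnode(bytes, node);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    /* ... bandwidth-bound work on buf goes here ... */

    numa_free(buf, bytes);
    return 0;
}
```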

    My military clients aren’t running Intel…

    The idea that a workstation can be unreliable is complete nonsense; then you have a desktop. A workstation is set apart in that it can work on a job for two weeks and not crash. When a client is pressured by a huge deadline, there is nothing worse than a quick crash here and there because you actually thought an i7 could be part of something called a workstation.

    On the Intel side, only Xeon parts can use ECC RAM and are therefore suited to workstation/server use in a professional environment. ALL AMD CPUs can use ECC, which is why you can build small, cheap, reliable servers and the like with them.

    3DLabs made the most amazing workstation cards, because I could get drivers re-written within days to solve a problem. Nvidia doesn’t know or care that you exist. My contacts at ATI do care and help the client get what they need and the optimisations they need. This sets AMD apart for me.

    Intel and Nvidia are too big to care, so I don’t care about them and won’t buy/sell their products unless I have to.

    My 20 cents.

    • Usacomp2k3
    • 9 years ago

    We have a pretty good-sized engineering department whose machines are 100% HP workstations with Opteron CPUs and Quadro graphics. I personally don’t know that it’s worth it, as the version of the software we have is a few years old and falls into the “lightly multithreaded” arena. I could go to Newegg and build a machine with a Core i7 and 24GB of RAM that would mop the floor with those machines in modern apps, but it probably wouldn’t show a difference at all in the apps we use.

    • wiak
    • 9 years ago

    Who has a workstation that hasn’t been built using desktop parts nowadays? I’d guess nobody, so it’s not a big surprise.

    It’s more like AMD is combining workstations with their desktops. Heck, with all the platform things they have been doing since they bought ATI, it makes sense.

    • shank15217
    • 9 years ago

    Wrong on both counts: Intel can be beaten, and has been beaten before, multiple times and in multiple market segments (stay tuned for 2011), and CUDA is very robust, otherwise people wouldn’t use it. Stop the FUD train and do more research.

    • WaltC
    • 9 years ago

    I think it is interesting to note that this is a blog post, as opposed to Peddie’s “official market share” estimates, and one line from it in particular indicates the idiot’s paradise the blog author inhabits.


      • shank15217
      • 9 years ago

      OK well, people who own Intel parts still own them; old parts don’t make AMD more money.

      • JumpingJack
      • 9 years ago

      You seem upset.

      The blogger asserts two facts: HP is dropping AMD from their workstation line, and AMD’s market share has fallen to nearly nothing.

      Is this a reason to call him an idiot, just because you don’t like what is happening?

    • StashTheVampede
    • 9 years ago

    What *was* a workstation-class computer? Multiple CPUs, gobs of RAM, fast disk I/O, and sometimes a pro-level graphics card.

    Multiple CPUs were solved by newer tech: you can get six cores from a single AMD chip and eight from an Intel one, which isn’t bad at all. RAM was solved simply over time: many Intel systems are shipping with 12GB of RAM! Want fast disk I/O? Go SSD.

      • Krogoth
      • 9 years ago

      Those high-end CPUs are technically “workstation” parts in the guise of “performance/enthusiast” tier equipment. The only thing missing is ECC support in the Intel camp. That isn’t the case with AMD, since almost all of the memory controllers in their CPUs have ECC support, although it’s a different matter on the motherboard end.

    • zqw
    • 9 years ago

    FireGL (ATI) cards have historically had bad OpenGL drivers/performance for pro apps, and not much in the way of Linux support. Not sure if that’s the case today, but it’s still how they are thought of. SPECviewperf and the like don’t reflect real-world performance.

      • Chrispy_
      • 9 years ago

      I just ordered another 30 Quadros because the FirePro cards that shipped aren’t able to render things properly in 3ds Max or Rhino. They’re only midrange cards, but no amount of tinkering with drivers fixed it: I found drivers that fixed each problem, but no driver that covered both apps at the same time. FAIL, AMD….

      Quadros are just a known value. Why mess around with what works in a corporate environment, where every hour of downtime costs your company a significant percentage of the GPU’s purchase cost?

    • blastdoor
    • 9 years ago

    People are kidding themselves if they think that AMD walked away from the workstation market rather than the other way around. Workstation customers switched to Intel because Nehalem totally dominates Opteron on workstation workloads. AMD can partially mitigate their disadvantages in the server space by throwing more cores at the problem and taking a profit hit, but in the workstation space that strategy doesn’t work as well because the workloads tend to be more vulnerable to Amdahl’s law.

    Consider this — if Bulldozer beats Sandy Bridge for workstation workloads, do you think AMD will turn up its nose at workstation customers? Of course not — they’ll take every sale they can get.

    • Krogoth
    • 9 years ago

    Workstations = high-end systems geared for real-world workloads with a single user or a few users at the helm. The key difference from a desktop is that workstations typically need faster I/O, more memory capacity, ECC support, workstation-class GPUs, and some kind of backup solution, all of which would be beyond the budget of most “gamers”.

    Workstations aren’t disappearing; the segment is simply becoming more niche. I suspect the sluggish economy is the real culprit here. It is harder for SMB types and self-employed professionals to budget for high-end equipment. Intel platforms have the upper hand in this segment, so it is no surprise that AMD is having problems here.

      • dpaus
      • 9 years ago

      “Workstations typically have the need for faster I/O, more memory capacity, ECC support, workstation-class GPUs and some kind of back-up solution” - ironically, all areas where AMD’s dual-socket Opterons have an advantage over Intel’s Core i7-class products (although not the Xeons, which are more likely the CPU of choice in this segment).

        • tfp
        • 9 years ago

        Core i7s aren’t workstation parts; like you said, those are Xeons.

    • ssidbroadcast
    • 9 years ago

    … I thought businesses were slow to upgrade workstation-class parts? Losing 9% of the market over a four-year period seems a bit much. I’m kinda skeptical of this data, but oh well…

      • mongoosesRawesome
      • 9 years ago

      They are measuring purchases each year, not what people are actually using.

        • ssidbroadcast
        • 9 years ago

        Ahhh, okay. Yeah that makes a lot more sense.

    • jdaven
    • 9 years ago

    A workstation used to mean something in the past: highly specialized hardware for processor- and graphics-intensive localized tasks. The cost was very high, but you could see a tangible benefit over ‘regular’ desktop components, and there was more emphasis on performance than stability, unlike the server world. In other words, you could reboot a workstation anytime without worrying about the downtime impacting mission-critical applications.

    With more and more cluster ‘supercomputers’ and servers performing heavy workloads and relegating computers in one’s office space to something more akin to a dumb terminal, the workstation space is becoming irrelevant. And for those companies that can’t afford such clusters, ‘regular’ desktop computers perform the tasks at hand just fine.

    I don’t really see a problem here for AMD. If there were a problem, you would see a significant decrease in their revenue. AMD’s revenue is up, but due to increased competition, their margins are crap. I never liked this distinction between desktop/workstation/server anyway; all three categories have products with overlapping functionality. For example, I can technically run an entry-level $300 desktop as a file-sharing server.

      • smilingcrow
      • 9 years ago

      “For example, I can technically run an entry level $300 desktop as a server for file sharing.”

      I think the categories relate as much to the hardware as to the software, so your server isn’t classed as a SERVER.

        • jdaven
        • 9 years ago

        Unless you replace the desktop OS with a server OS. This is not too far-fetched, since MS sells Windows Home Server and Apple sells a server version of the Mac Mini. I think you can install other versions of MS Server on any x86 hardware. The same goes for Mac OS X Server (you can install it on an MBP, unless someone corrects me).

          • sjl
          • 9 years ago

          http://www.apple.com/server/macosx/specs.html says that the system requirements are a "Mac desktop or server with Intel processor". This implies that laptops are out. Frankly, I wouldn't even think about running Mac OS X Server on anything other than a Mac Mini or a Mac Pro (preferably the latter, as it's more expandable internally, not to mention ECC support. Yes, I know about the Xserve. No, I wouldn't bother; it's being ditched. Pity.)

            • Deanjo
            • 9 years ago

            OS X server runs fine on their portable machines. They just don’t specify it in the specs as one doesn’t typically run a server off a laptop.

    • dpaus
    • 9 years ago

    The blog post neglects to mention how much the ‘workstation’ market itself has shrunk, which could well be a key reason for AMD’s painful-but-strategically-correct decision to walk away from it. We used to use ‘workstations’ (Silicon Graphics FTW!!) extensively, but these days a six-core Phenom II and almost any of their PCIe x16 graphics cards are more than enough.

    • can-a-tuna
    • 9 years ago

    Intel is a monster that cannot be beaten. On the other hand, Nvidia’s CUDA is just marketing bollocks that every hardware site praises for nothing. Nvidia’s marketing/sleaze-bag engine is what has kept them going all these years.

      • TravelMug
      • 9 years ago

      The only sleaziness from NV is in the desktop/gamer market. AMD is nowhere in the workstation market compared to what NV is offering. That market is more about the customized/optimized drivers for specific apps, the software that comes with the card, and, yes, even CUDA and the ecosystem around it, than about the hardware itself.

        • Game_boy
        • 9 years ago

        AMD is competitive in professional 3D software performance. Just look at the FirePro vs Quadro benchmarks for those applications and you’ll see the 5870-analogue is marginally behind the GTX480-analogue and much cheaper.

        They have caught up with their drivers over the last two years, I would say. Market inertia is stronger in commercial markets than on the desktop, so it will take time for AMD to be recognised as an alternative (much as it took longer for AMD64 Opterons to be recognised, even though they were clearly better than P4 Xeons). Right now the market sees Nvidia as the only safe choice for corporate buying.

            • BlackStar
            • 9 years ago

            Tom’s Hardware? OK, how about a review from someone who’s actually trustworthy?

          • Silus
          • 9 years ago

          Marginally behind?

          http://www.pcper.com/article.php?aid=998&type=expert&pid=1

          The Quadro 5000 cleans house in almost everything, sometimes by over 50%. AMD is not competitive at all in the professional market.

            • BlackStar
            • 9 years ago

            The article you quoted concludes:

            “if we compare only the top solutions from each vendor, the AMD offering still has a price/performance advantage.
            […]
            if you are in the market for a multi-display card, there is nothing even close to the FirePro V9800 and its 6 DisplayPort outputs except maybe AMD’s own V8800 with four.”

            Reading comprehension fail, Silus.

            • Silus
            • 9 years ago

            Did you even look at the performance numbers?

            Why bother, anyway? You already made your decision. Not that you’re going to buy one of those cards, but those who do have already made theirs as well, and it’s usually not to get an AMD card.

            • BlackStar
            • 9 years ago

            Think again. I work in the VR sector and, yes, my workstation is running on a Quadro.

            The site you quoted regards AMD as superior in price/performance and features and gives Nvidia a slight performance advantage (read their last page). Now, gamers might buy hardware for bragging rights, but companies do *not* do that. When weighing the value of the investment against total performance, price, and features, AMD comes out on top *right now*.

            Keyword: right now. This might change in the future.

            • Silus
            • 9 years ago

            I guess you need to read the performance numbers again. Quadros do not have a “slight” advantage in performance; the difference is quite big, which is also why they are more expensive. Granting the AMD hardware a price/performance advantage is quite relative, since a much slower part should be less expensive than the much faster one. But I’m not going to judge the review on its conclusion; the numbers are much more revealing.

            Plus, support yet again comes to mind, and NVIDIA’s is much better than AMD’s in the professional market, which also tips the scale in favor of NVIDIA’s hardware, despite the feature deficit you mentioned.

            You work in the VR sector, but did you yourself buy the hardware in your workstation?

            • BlackStar
            • 9 years ago

            You don’t buy workstation cards out of your own pocket money, sorry to disappoint. That’s why they are called “workstation” cards after all. 🙂

            Now, the numbers show one test suite where Nvidia dominates (SPECviewperf 11) and two test suites that find the AMD offering faster (Cinebench 11 and 3DMark Vantage). The review’s conclusion remarks:

            “Since then NVIDIA has released their Quadro update based on the Fermi architecture and has taken back some of the spotlight; AMD still has the edge in features but the performance edge probably leans towards NVIDIA for now.”

            Things are not nearly as one-sided as you are saying.

      • Silus
      • 9 years ago

      You don’t even seem to know what CUDA is or what it offers that makes people praise it. So your “argument” (if that AMD fanboy drivel can be called an argument) is that companies that want to speed up their research by using GPUs are not actually getting any advantage out of it and just praise it to please NVIDIA’s marketing…

      And speaking of sleaze-bag marketing, how’s your AF quality, by the way? Is AMD telling you that they are not decreasing quality, just to get a couple more fps?

        • BlackStar
        • 9 years ago

        Please enlighten us. Why should we lock ourselves in with CUDA and not use OpenCL which works everywhere?

        AF quality is just fine, thank you very much.

          • Silus
          • 9 years ago

          Hehe, spoken like someone who has never used either! Typical…

          Do you understand how limited OpenCL is right now compared to CUDA, because it’s still in its infancy? Not to mention the huge existing support for CUDA, which OpenCL lacks in comparison.
          You have to realize that zealotry for a certain company doesn’t apply when there are millions of dollars invested in a research project that reaps benefits from existing CUDA apps and CUDA-capable hardware. The companies and people who invested those millions are not going to walk away from that and place their bets on something that still has a lot of catching up to do. And of course, you forgot to say (or simply don’t know) that CUDA doesn’t lock you out of anything: anything CUDA can be ported to DirectCompute/OpenCL/whatever. And if OpenCL does catch up, I’m sure that’s what NVIDIA will do.

            • Deanjo
            • 9 years ago

            Agreed, OpenCL is still in its infancy. There are other factors as well, such as documentation, available tools, bindings, etc. Not to mention that even though it’s “portable”, OpenCL code still requires device-specific coding to optimize its performance. There is also the issue that HPC applications are usually written for the specific hardware at hand, which, like it or not, is dominated by Nvidia’s offerings in the HPC crowd, so there is little motivation to move to an API that requires more work to implement and is still suffering from teething pains. OpenCL will probably have more of an impact on consumer-level applications than it will in the HPC market.
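            As a sketch of that device-specific tuning, the standard OpenCL C API lets you query each device and derive launch parameters instead of hard-coding them (error checking omitted; the 64-wide cap is illustrative, not a rule):

```c
/* Per-device tuning sketch: same kernel everywhere, but the launch
 * geometry is derived from what the device reports about itself. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id dev;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    size_t max_wg;      /* largest work-group the device allows */
    cl_uint simd_width; /* preferred float vector width */
    clGetDeviceInfo(dev, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                    sizeof(max_wg), &max_wg, NULL);
    clGetDeviceInfo(dev, CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT,
                    sizeof(simd_width), &simd_width, NULL);

    /* GPUs of this era differ (e.g. 32-wide vs 64-wide wavefronts),
       so derive the work-group size rather than hard-coding it. */
    size_t local = max_wg < 64 ? max_wg : 64;
    printf("max work-group %zu, preferred float width %u, using %zu\n",
           max_wg, simd_width, local);
    return 0;
}
```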

            • BlackStar
            • 9 years ago

            That’s true for any highly specialized code. You need to optimize differently for different hardware to achieve optimal performance, be it Atom vs i7, ATI 6870 vs Nvidia 580, or Nvidia OpenCL vs AMD OpenCL vs Intel OpenCL. It’s natural.

            I can understand using CUDA for software with an established CUDA codebase. I can understand using CUDA because the developers have gained expertise on CUDA tools and development practices.

            However, CUDA for new software makes less and less sense, given the advent of cross-platform, supported alternatives. Every major hardware company (CPU & GPU) and operating system now offers OpenCL; CUDA is on its way out. (A minimal sketch of the portability point follows below.)

            @Silus: you are wrong, I have worked with OpenCL ever since AMD’s CPU drivers appeared. The tooling used to be inferior to CUDA at first but this is no longer the case.
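            (A minimal sketch of that portability point: the same kernel source builds and runs on any vendor’s OpenCL stack, with each driver’s compiler doing the device-specific codegen. Host setup and error checking are omitted, and the function and kernel names are illustrative.)

```c
/* Portable OpenCL vector-add sketch: one kernel source, any vendor. */
#include <CL/cl.h>

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

void run_vadd(cl_context ctx, cl_command_queue q, cl_device_id dev,
              cl_mem a, cl_mem b, cl_mem c, size_t n)
{
    /* Each vendor's driver compiles the kernel for its own hardware. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);

    cl_kernel k = clCreateKernel(prog, "vadd", NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &b);
    clSetKernelArg(k, 2, sizeof(cl_mem), &c);

    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q);

    clReleaseKernel(k);
    clReleaseProgram(prog);
}
```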

            • Silus
            • 9 years ago

            No, CUDA is not on its way out simply because OpenCL will run on top of it.

            OpenCL is nothing more than an abstraction layer, a standard for “communicating” with the GPU. OpenCL instructions still need to be translated into whatever the GPU natively understands: CUDA, Brook+, etc. This is done in drivers. You should know this by now… if you really worked with it.

            And what tools are you talking about? CUDA is about versatility, and its C-like instructions make it easier to learn. OpenCL can’t compare with this, nor is it trying to; it’s just a standard for GPU instructions. That’s why OpenCL is in its infancy: there’s much you can’t do yet on all GPUs, and you can’t take advantage of all the architecture-specific enhancements. It will eventually get there (hopefully), but there’s still much to do.

            • BlackStar
            • 9 years ago

            The reason CUDA will fail is that it is backed by a single company (Nvidia). Contrast that with OpenCL, which is backed by Nvidia, AMD, Intel, IBM, and Apple.

            On your other points:

            1. OpenCL and CUDA are both abstraction layers, exactly like GLSL, HLSL, or the C programming language. Your OpenCL driver translates your OpenCL code into a binary that gets executed on the GPU. How your code gets there is irrelevant – one vendor might generate Gallium3D IR, another might translate to CUDA, but these are vendor-specific implementation details that are *not* dictated by the OpenCL standard.

            2. OpenCL supports extensions. You are perfectly able to use hardware-specific features that are not exposed in the core specification (see the sketch after this comment). CUDA doesn’t have an advantage here.

            3. OpenCL is not a standard for GPU instructions. There *is* no standard for GPU instructions, as each vendor uses vastly different processor designs. As you said, OpenCL is an abstraction layer, just like CUDA.

            The only advantages of CUDA are better tooling and legacy codebases. Both are transient and will evaporate as time passes.
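            (To illustrate point 2: hardware-specific features surface as extensions that are queryable at runtime. A minimal sketch using cl_khr_fp64, the double-precision extension from the spec, as the example; error checking omitted and the helper name is illustrative.)

```c
/* Extension-query sketch: gate optional hardware features at runtime. */
#include <string.h>
#include <CL/cl.h>

int device_supports(cl_device_id dev, const char *ext)
{
    char buf[4096]; /* space-separated extension list */
    clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof(buf), buf, NULL);
    return strstr(buf, ext) != NULL;
}

/* Usage: if (device_supports(dev, "cl_khr_fp64")) build the
   double-precision kernel; otherwise fall back to the float path. */
```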

          • Amorphous
          • 9 years ago

          You don’t understand what CUDA is if you’re arguing CUDA vs OpenCL.

          CUDA is the engine in the GPU that allows NVIDIA GPUs to run OpenCL, C++ for CUDA, C for CUDA, DirectCompute, and the rest of the list. Without CUDA, NVIDIA couldn’t run OpenCL. In the same way, without Stream, AMD couldn’t run OpenCL.

          CUDA is superior to Stream in many ways. CUDA is far more flexible: you have many code options and professional tools to assist with application development. AMD users’ choices of code for Stream are very limited; you get OpenCL, DirectCompute, and a heavily modified version of a fairly obscure language.

          NVIDIA is closely associated with Khronos, the group that created OpenCL, and has a longer history of OpenCL support than AMD does.

          NVIDIA has better development tools and developer support than AMD does. This could change, but it hasn’t for the past 5 or 10 years, so I wouldn’t expect it to.

            • BlackStar
            • 9 years ago

            The fact that Nvidia based their OpenCL compiler on CUDA is *irrelevant*. It’s an *implementation* detail. They’d be able to support OpenCL without CUDA just fine – it just made sense to cut costs by reusing their existing technology stack, just like they did with GLSL-on-top-of-Cg back in the day.

            How difficult can this be to grasp?

            The mere fact that Nvidia is one of the core designers of OpenCL should hint at the fact that CUDA will slowly fade away. OpenCL has simply gained too much momentum for that not to happen.

            • Amorphous
            • 9 years ago

            CUDA is a compute engine. Not a compiler. Without CUDA, there is no compute, in any language, on NVIDIA GPUs.

            • tigen
            • 9 years ago

            It’s not an engine. It’s a marketing term and an API. That API is what is going to become irrelevant in the long run.

            • Amorphous
            • 9 years ago

            DirectCompute is an API, and OpenCL is an API. CUDA is a parallel processing engine; those APIs run on CUDA, or on Stream (ATI’s equivalent). Go read some whitepapers:

            http://developer.download.nvidia.com/compute/cuda/docs/CUDA_Architecture_Overview.pdf

            While there are compilers specifically for CUDA, CUDA isn’t a compiler.

    • shank15217
    • 9 years ago

    I have to agree, the workstation market is slowly disappearing and being replaced with high-end multi-core desktops. CUDA is great, but I’m not sure how much it helps in the workstation world. SSDs and low-cost JBODs have also brought much of the performance benefit to desktops. One important thing: Intel’s high-end platforms have 6 memory slots whereas AMD’s have only 4, which allows Intel-based high-end desktops to easily pack 24GB of RAM in a single-socket system, making them more desirable. Workstation workloads are usually memory- and I/O-limited, as graphics cards take a lot of load away from CPUs.

      • AlvinTheNerd
      • 9 years ago

      G34 Opteron systems have quad-channel memory and can pack 32GB of unbuffered memory per socket. By your argument, G34 systems should be the king of workstations.

        • shank15217
        • 9 years ago

        As they should, I guess; they support 12 cores as well. AMD should push into that space. The only problem is that those Opterons run at low clock speeds and therefore have bad lightly-threaded performance.

          • Deanjo
          • 9 years ago

          The applications that are typically run on “workstation class” machines are not what you would consider “lightly threaded”.

        • chip
        • 9 years ago

        G34 + 12-core Opterons still offer an excellent price/performance/power combination for a range of workstation tasks. I’ve just built a 48-core deskside Opteron machine for ~65% of the price of an equivalent Intel machine, and the power use is lower too. For the kind of tasks it’s used for (phylogenetic inference), it’s ideal.
