Intel releases the final round of Itanium CPUs

It's been seven years since we last reported on the Itanium line of processors, but it appears that article just wasn't destined to be our last on the subject. Intel has just added four new processors to its Itanium family. Now, this article does have a chance to be our last report on the ill-fated CPU lineup, since Intel confirmed to PCWorld that these processors will be its final Itanium chips.

Processor    | Cores/threads | Speed    | L3 cache | TDP
Itanium 9720 | 4/8           | 1.73 GHz | 20 MB    | 130 W
Itanium 9740 | 8/16          | 2.13 GHz | 24 MB    | 170 W
Itanium 9750 | 4/8           | 2.53 GHz | 32 MB    | 170 W
Itanium 9760 | 8/16          | 2.66 GHz | 32 MB    | 170 W

The new Itanium 9700-series processors are built on Intel's somewhat long-in-the-tooth 32-nm process. There are four variants: the quad-core Itanium 9720 and 9750, and the octa-core Itanium 9740 and 9760. All models feature Intel's Hyper-Threading, Turbo Boost, and virtualization technologies, and all have relatively high TDPs. The three higher-end models have a 170 W TDP, while the lower-clocked Itanium 9720 has a 130 W TDP.

Intel originally launched the Itanium family way back in 2001 with grandiose plans for making it the successor to the x86 architecture, but those plans never quite materialized. While the associated instruction set was ambitious and there certainly was demand for 64-bit support, upgrading to Itanium involved both new hardware and a new ecosystem of software from a variety of companies. AMD's 64-bit extensions to the x86 instruction set proved to be much easier to adopt, seeing as even Intel ended up cross-licensing them for its Pentium 4, Xeon, and Core processors.

The primary destination for the new Itanium processors will likely be Hewlett Packard Enterprise's Integrity servers, which have been powered by Itanium CPUs for many years. Intel no doubt wants to avoid the legal troubles Oracle faced when it tried to stop supporting these servers back in 2012. The new Itanium processors, then, appear to be one last round of chips made to satisfy a contract with HPE. We'd love to say that Itanium will be missed, but we very much doubt that's the case.

Comments closed
    • tygrus
    • 3 years ago

    Looks like a small stepping update to the previous Poulson version. Same cores, same cache, same RAM, same instructions, same 32nm with minor improvements, a 5% speed boost. Not what Kittson could have been if they’d used 14nm, updated memory controllers (more like the Xeon E7), and other improvements (but no $$$ for R&D on an EOL product).

    • ludi
    • 3 years ago

    I wonder who will actually buy these, besides possibly Paul DeMone. Wonder what happened to that guy.

    • alrey
    • 3 years ago

    I still remember the Intel seminar I attended way back around 2002. They were saying that 32-bit CPUs would likely peak at 800MHz and that Itanium was the answer to bring out more powerful CPUs.

    • ronch
    • 3 years ago

    If you think about it, if it weren’t for AMD64, Itanium probably would’ve seen much better adoption, or maybe some other competing ISA could’ve surfaced.

      • NTMBK
      • 3 years ago

      Thank god for AMD.

      • Klimax
      • 3 years ago

      Unlikely. Too much dependency on the compiler for scheduling.

      • eofpi
      • 3 years ago

      Some other ISA would’ve surfaced. VLIW is just too inherently limited to succeed on a CPU.

    • the
    • 3 years ago

    What, no titanium branding to match the coming Xeons?

      • NTMBK
      • 3 years ago

      Itanic would be more appropriate

      • CuttinHobo
      • 3 years ago

      ‘Poop’ isn’t a precious metal.

      *Ba-dum-bum*

    • Kretschmer
    • 3 years ago

    I would love to see 2017 Itanium benchmarks, TR!

    Also, how depressing would it be to work on Itanium in 2017?

      • NTMBK
      • 3 years ago

      Given the kind of customers that still need Itanium (and probably will for the next 10 years), I bet it’s a lucrative niche to be in. Kind of like maintaining COBOL.

        • davidbowser
        • 3 years ago

        I had a conversation with some HPUX engineers about 7-8 years ago and they told me that there were several active and vocal internal pushes to kill HPUX and migrate customers to Linux, but customers simply refused. They told me they had customers still running OpenVMS (and I actually met one in the pharm/biomed industry) and several more running Tru64.

        HP was put in a position by its customers of either running a break-even (or money-losing) business unit, or watching those customers bolt from its x86 server/PC, enterprise software, and professional services businesses. It was a losing battle for over 10 years, and the company got broken up anyway.

          • johnrreagan
          • 3 years ago

          I’m one of the compiler engineers in the OpenVMS group. I’ve worked on the VAX, Alpha, and Itanium code generators. OpenVMS has been licensed to VMS Software Incorporated. We are supporting the latest Itaniums and will support OpenVMS on these final chips as well. In addition, OpenVMS is being ported to x86-64. There are still lots of active customers using OpenVMS in many different computing environments.

          With regard to HP-UX and NonStop, both of those OSes are big-endian. Moving to a little-endian environment can be disruptive for many customers without additional software support. For example, the NonStop offering on x86-64 effectively presents a big-endian model by byte-swapping most data to/from memory. The NonStop environment is unique enough that the benefits outweigh the extra overhead (plus the x86 product uses a much faster InfiniBand fabric).
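
          A minimal sketch of that byte-swapping idea in C (my illustration only; the real NonStop implementation is far more involved):

              #include <stdint.h>

              /* Read a 32-bit big-endian field from memory on any host. */
              static uint32_t load_be32(const void *p) {
                  const uint8_t *b = (const uint8_t *)p;
                  return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                         ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
              }

              /* Write a 32-bit field to memory in big-endian order. */
              static void store_be32(void *p, uint32_t v) {
                  uint8_t *b = (uint8_t *)p;
                  b[0] = (uint8_t)(v >> 24); b[1] = (uint8_t)(v >> 16);
                  b[2] = (uint8_t)(v >> 8);  b[3] = (uint8_t)v;
              }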

            • davidbowser
            • 3 years ago

            My question to an expert like yourself would be:

            What is the level of effort to recompile/replatform apps from Itanium to x86-64 on OpenVMS?

            The last code I wrote in VMS was Fortran, but I am guessing that many of the business and science applications may have been written in the mid-90s, so my frame of reference might not be all that far off.

            • johnrreagan
            • 3 years ago

            Languages like COBOL, Fortran, Pascal, etc. tend not to have many hardware dependencies in them. They DO have OS dependencies like system calls, reliance on OS features, reliance on language extensions, etc. Those language test systems are full of programs from the mid-90s (and earlier) and work just fine on Alpha and Itanium (and should continue to work just fine on x86-64).

            When we ported OpenVMS first from VAX to Alpha and then Alpha to Itanium, almost all of the programs were “recompile and go”. We even provide a VAX Macro-32 compiler to allow old VAX code to work on Alpha and Itanium (with some restrictions). We expect the same for x86-64 (we’re still working on the cross-compilers with 1st boot planned for later this year).

            For something like C, it depends on whether you used embedded asm()s or cheated and knew how to pick apart the argument lists by hand. However, if you used <stdarg.h> (or even <varargs.h>), those should continue to work, as in the sketch below.
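
            A minimal, hypothetical example of the portable style (not taken from our test suites):

                #include <stdarg.h>
                #include <stdio.h>

                /* Sum `count` ints. <stdarg.h> hides the argument-passing
                   details, so this recompiles cleanly across VAX, Alpha,
                   Itanium, and x86-64 calling conventions. */
                static int sum_ints(int count, ...) {
                    va_list ap;
                    int total = 0;
                    va_start(ap, count);
                    for (int i = 0; i < count; i++)
                        total += va_arg(ap, int);
                    va_end(ap);
                    return total;
                }

                int main(void) {
                    printf("%d\n", sum_ints(3, 1, 2, 3)); /* prints 6 */
                    return 0;
                }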

            For Alpha and Itanium, we used our own proprietary multi-language/multi-target code generator (think gcc or LLVM before there were things like GitHub and “open source”). For x86-64, we are writing a converter from our proprietary interface to LLVM’s IR. That essentially gets us most of those compilers with little work in the frontends (there are some interesting issues, but I won’t hijack the thread for that).

            For Itanium specifically, the current Poulson chips added several new instructions to the Itanium architecture (there is now a 32-bit integer multiply). The underlying implementation is now out-of-order and feeds the pipeline with twice as many instructions per cycle (12, up from 6). A few more functional units, but a few fewer memory units. It did provide a significant speedup to existing images. Of course, as has been discussed about VLIW in general, it is often difficult for the compiler to get the scheduling right. The move to out-of-order lets the chip cover for sloppy/lazy compilers, but that came too late to save things.

            You do run the risk of exposing latent compiler bugs when the newer chips start doing more things in parallel, but that kind of risk is always there on any architecture. So far, we haven’t bumped into those sorts of things. Since Kittson seems like just a speedup, the compilers probably don’t care.

            The OS might care if the UEFI or ACPI needs updating or if the system motherboards might provide newer hardware support (think: USB3.1 or faster NICs) than what was previously part of the system.

    • LauRoman
    • 3 years ago

    Holy effin power draw Batman.

    • Kougar
    • 3 years ago

    Soon to become unobtanium

    • blastdoor
    • 3 years ago

    Itanium was arguably the most successful marketing effort in Intel’s history. Except for IBM, all of the RISC guys self-immolated before Itanium was launched, having been so thoroughly terrified by Intel’s marketing hype. By the time they realized Itanium was a Potemkin village, it was too late.

    Sad!

      • chuckula
      • 3 years ago

        It was a combination of fear over Itanium, which wasn’t entirely justified, and the explosive growth of x86 and clustering; the latter was the real reason most of those architectures aren’t that popular anymore.

        • blastdoor
        • 3 years ago

        I think it’s unfortunate that Alpha died.

          • chuckula
          • 3 years ago

          AMD isn’t.

          If it weren’t for Alpha dying, they couldn’t have scooped up all the engineers who effectively put Alpha’s designs into an x86 body for the Athlon.

            • blastdoor
            • 3 years ago

            Meh.

            Alpha was partially resurrected only to die again.

            I suppose one could also argue that the ghost of Alpha inhabits Apple’s A-series SoCs, too.

            But I’d like to see the parallel universe in which DEC was able to turn Alpha into a successful product. They had envisioned it would be a design that could be extended for 25 years. We are nearing the end of that timeframe… it would be cool to see what it would look like today.

            • Beahmont
            • 3 years ago

            I believe we did, though. Alpha, from what I know (which is admittedly just a broad overview), was based around the same logic that Netburst and Bulldozer were later based on. Back before we hit the clock wall, there were essentially two ideas of how to gain performance: moderate-to-low IPC with crazy clock speeds, or moderate-to-high IPC with moderate-to-low clock speeds. Alpha, like Netburst and Bulldozer after it, chose the path of moderate IPC and crazy clock speeds.

            As a practical matter, every µarch that has tried to go down this path has run into the problem that some things just won’t parallelize well (or at all) and that foundry processes will never allow you to hit the clock speeds you need to make your single-thread throughput competitive with µarches that took the other path.

            • AnotherReader
            • 3 years ago

            You are conflating microarchitecture and ISA. The first couple of Alphas, the 21064 and the 21164, were in-order speed demons, but the 21264 was a brainiac like the Athlon or the Pentium III. The EV8 would have been a monstrous CPU capable of making Core 2 weep, and it would have introduced load-store disambiguation many years before Intel did.

            [Edit]: EV8 was supposed to be followed by Tarantula (http://lagunita.stanford.edu/c4x/Engineering/CS316/asset/tarantula.pdf), which would have been capable of 32 DP flops/cycle. That is a rate greater than every common CPU except the Skylake Xeons.

            • Klimax
            • 3 years ago

            Theoretically capable of making Core 2 weep. Theory/on-paper and practice are very different things.

            • chuckula
            • 3 years ago

            On paper Itanium made the Core 2 weep too.

            • eofpi
            • 3 years ago

            AMD got the engineers; Intel got the IP. One of these was useful.

      • Anonymous Coward
      • 3 years ago

      See also: SPARC.

      • the
      • 3 years ago

      Itanium killed off three major architectures before its release. Ironically, the “before its release” part only applies due to the sheer number of delays the first chips had, nearly two full years if I recall correctly. Had it been on time, things would have played out differently. In fact, it might have survived: had Intel been able to keep up with their original release schedule, Itanium would have actually been competitive with the RISC chips of that time frame.

      PA-RISC was deprecated well in advance of Itanium’s release; HP, who partnered with Intel on Itanium, also produced PA-RISC. This one was well planned, and PA-RISC users had a clear on-ramp strategy to Itanium hardware.

      MIPS was another platform that was being deprecated before Itanium was even released. SGI was in charge of developing its high-performance chips for their workstations. However, SGI didn’t have the resources for the continued investment needed to keep MIPS competitive in the workstation/server space. Thus SGI sold off its stake, which left MIPS to pursue the embedded space.

      While Compaq was mildly interested in Itanium, they were still planning on offering their Alpha architecture. It wasn’t until HP was purchasing Compaq that their attitude changed. To get the acquisition through regulators, Compaq spun off Alpha as a separate company, which was bought by Intel after the dust settled on the Compaq/HP merger. The thing here is that this happened after the release of the first Itanium, with Alpha retaining performance leadership. This left Alpha customers asking why they should replace their systems with something that was underperforming.

    • Krogoth
    • 3 years ago

    This is purely an upgrade for those who were long-term investors in the Itanium platform.

    IA64 is a very interesting architecture, but it was a victim of being too little, too late. It was meant to go up against DEC and IBM back in the day, but that market was almost dead by the time Itanium became commercially available. It ended up competing against its x86 siblings, and HPC shifted from old-fashioned big-iron boxes to clusters.

      • NTMBK
      • 3 years ago

      If by “very interesting” you mean “a bloody terrible idea”, sure, it was very interesting.

      Don’t rely on magical compiler tricks to make your architecture efficient.

        • chuckula
        • 3 years ago

        Itanium is exactly what happens when Intel does what everybody says they want Intel to do: drop all that crufty x86 baggage and use all the best ideas from advanced research to build a new hyper-awesome CPU!

        Intel did exactly that. Hell, they even worked directly with a major big-iron vendor, HP, which presumably knew exactly what big-iron customers actually wanted in a next-generation chip.

        The results weren’t pretty though, even if it dumped the x86 cruft.

        Sure it sucks now, but it’s easy to say that something sucks after 15 years of experience with the finished product.

          • Anonymous Coward
          • 3 years ago

          Is Itanium proof that excessive resources ruin projects? It seems like things go best when there is an inspired vision and not just a room full of people who mean well.

          • NTMBK
          • 3 years ago

          Itanium came from the same era as Netburst, which didn’t dump x86 and *still* sucked. I think Intel just kept having bad ideas in that era.

            • srg86
            • 3 years ago

            Itanium was way older than Netburst. Intel had been working on what became Itanium since the early 90s.

            I still love Bob Colwell’s quote that Intel had originally planned to replace x86 with IA-64 by 1997.

            • BurntMyBacon
            • 3 years ago

            While it did come from the same era as Netburst, it started development long before Netburst, and it was practically the opposite at a high level. Netburst used extremely deep (and narrow) pipelines with short stages to increase the frequency at which the processor could run. It was like an assembly line where no stage gets much done, but gets it done quickly; after 28 or 31 stages, you get the job done.

            EPIC, on the other hand, was extremely wide and comparatively shallow. It even went a step beyond getting rid of the legacy x86 decode stage and got rid of the hardware scheduler as well. This placed the burden of scheduling on the compiler at compile time, freeing the processor to focus on other tasks at run time. In hindsight, this means that the performance of the processor was largely dependent on the programmer’s and/or compiler’s ability to optimize the scheduling. Furthermore, it increased the cost of development for the platform for any application where performance was desired. Finally, the architecture required programmers to code less sequentially and more in parallel to really bring out its performance, a task that programmers today (15 years later) still resist.
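
            A toy C illustration of that scheduling burden (my own sketch, not actual EPIC code): the first function is easy to schedule statically because every operation is independent, while the second gives the compiler nothing to pack.

                /* Independent operations: a VLIW/EPIC compiler can bundle
                   these into wide issue groups at compile time. */
                void saxpy4(float *restrict y, const float *restrict x, float a) {
                    y[0] += a * x[0];
                    y[1] += a * x[1];
                    y[2] += a * x[2];
                    y[3] += a * x[3];
                }

                /* Pointer chasing: each load depends on the previous one,
                   so there is nothing to bundle and the wide machine idles. */
                struct node { struct node *next; int val; };
                int sum_list(const struct node *n) {
                    int s = 0;
                    for (; n != NULL; n = n->next)
                        s += n->val;
                    return s;
                }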

            While in hindsight there were some bad decisions with EPIC, I think the larger issue is the timing of the architecture. That said, I’m not sure there was a good launch window: launch too early, and nobody knows how to program for it; launch too late, and the parallel code is already shifting to GPUs. There was also the 64-bit push to consider. Perhaps the architecture did have its best shot when people were demanding a new (64-bit) architecture anyway.

          • snowMAN
          • 3 years ago

          Itanium wasn’t ever awesome; the whole premise on which the architecture is based is deeply flawed, which is why performance sucks in most use cases.

        • srg86
        • 3 years ago

        And Intel managed to make the same “magical compiler tricks” VLIW mistake twice.

        First with i860 and then with IA-64.

        • Krogoth
        • 3 years ago

        IA64 was being drafted back when the 486 was new and the P5 was in the prototype stage, while Intel was still drafting up P6. They didn’t expect the x86 ISA to endure so long.

    • chuckula
    • 3 years ago

    Many many many moons ago I actually was an Itanium user (albeit indirectly), since SGI required you to purchase Itanium-based metadata controller nodes for their CXFS filesystem.

    I never actually used the Itanium nodes directly, but CXFS was pretty robust for its time, although there are far better options available nowadays.

      • nanoflower
      • 3 years ago

      Never worked with CXFS but XFS was a great file system for its time. Many great ideas were implemented in that filesystem that have now spread into others. That was from the days when SGI was a great company with big ideas.

        • chuckula
        • 3 years ago

        XFS is still alive (and actually pretty well maintained and updated) in Linux to this day.

          • atari030
          • 3 years ago

          As it’s the default filesystem, supplanting the EXT* series for RHEL7 (and some other distros), I’d sure say so, yep!

    • tipoo
    • 3 years ago

    Is Itanium as a revenue source still single-handedly bigger than AMD’s entire cash flow? Or has that strange fact ebbed away by now?

      • chuckula
      • 3 years ago

      That’s over although it was true for a while.

    • chuckula
    • 3 years ago

    RyZen has claimed its first victim.

    Your so-called “Skylake” Xeons are next, Intel! Just you wait!

      • Anonymous Coward
      • 3 years ago

      No no… Intel launched these in response.

        • chuckula
        • 3 years ago

        Obligatory: You’re so vain. You probably think this launch is about you.

          • Anonymous Coward
          • 3 years ago

          I’m still working on understanding the situations where that meme can be applied. Also the situations where it *must* be applied.

      • raddude9
      • 3 years ago

      Why do you cr@p on AMD at every possible opportunity?

        • chuckula
        • 3 years ago

        Why do you have such a problem with AMD that you get upset when I agree with their marketing department that their 32 core parts will easily destroy those crappy Xeons?

        You DO realize that AMD is launching 16 core Naples megatasking parts at the end of the month that will destroy all Intel parts right? I mean, we know for a fact that 22 Naples cores are 2X faster than 22 Broadfail cores, so the 16 Core Naples parts — which are of course only going to cost $599 or so — will easily be 45% faster than Skylake. Oh and that’s BEFORE you overclock them!

        Why do you have such little faith in AMD? Didn’t you read TR’s review guide and upthumb Bensam123? WHY DO YOU HATE AMD SO MUCH!

          • raddude9
          • 3 years ago

          AMD have not announced a 16-core Ryzen, and although I sometimes read leaks and speculation I rarely bother to comment about unreleased products.

          I find your thinly veiled sarcasm curious though. Now, I know you have avoided this question at every turn, sometimes skillfully deflecting the conversation and sometimes bluntly ignoring it…

          What is your interest in Intel?

            • cygnus1
            • 3 years ago

            Ryzen =/= Naples. Both are Zen-based: Ryzen (Zen), Naples (Zen).

            AMD has talked about Naples (aka the new Opteron) with up to 32 cores, I believe.

            • raddude9
            • 3 years ago

            current Ryzen =/= Naples =/= 16-core Zen based CPU

            Chuckula was deflecting the question about his Intel interests by commenting on the speculated 16-core Zen-based CPU (this chip may be “Ryzen”-branded or it may not). The speculation/leaks seem to indicate that this chip will be something in between an 8-core Ryzen and a 32-core Naples, with quad-channel memory and two 8-core dies in an MCM.

            • cygnus1
            • 3 years ago

            I wouldn’t be surprised if 16-core Naples models end up being released, with the 32-core part just being the highest-count version of many models. It might even be like Xeons, where the lower core-count models get the higher frequencies.

      • Kretschmer
      • 3 years ago

      Ryzen and Itanium do not compete. I’d imagine that Itanium is only refreshed due to contractual obligations to IA64 customers, at this point.

      • cygnus1
      • 3 years ago

      I don’t think these guys replying to you get the joke… good troll 😉

      • tipoo
      • 3 years ago

      You’re so vain you probably think Itanium is about you!
