Intel unleashes quad-core Itanium 9300

Although the mass of both current and upcoming Nehalem-based Xeons might suggest otherwise, Intel still hasn’t killed its Itanium processor line—far from it. Just yesterday, the company introduced five Itanium 9300 processors based on the brand-new Tukwila architecture.

Tukwila features four cores, eight threads, 24MB of L3 cache, Turbo Boost, QuickPath interconnects, second-gen hardware virtualization tech, and some new reliability features, all laid out on a massive, two-billion-transistor die. Intel claims the new Itanium 9300 series more than doubles performance over previous-gen dual-core Itaniums, bringing eight times the interconnect bandwidth, five times the memory bandwidth, and seven times the memory capacity with “industry standard DDR3 components.”

The Itanium 9300 lineup looks like so:

Processor      Cores/threads   Speed      Turbo speed   L3 cache   QPI        TDP     Price
Itanium 9350   4/8             1.73 GHz   1.86 GHz      24 MB      4.8 GT/s   185 W   $3,838
Itanium 9340   4/8             1.60 GHz   1.73 GHz      20 MB      4.8 GT/s   185 W   $2,059
Itanium 9330   4/8             1.46 GHz   1.60 GHz      20 MB      4.8 GT/s   155 W   $2,059
Itanium 9320   4/8             1.33 GHz   1.46 GHz      16 MB      4.8 GT/s   155 W   $1,614
Itanium 9310   2/4             1.60 GHz   N/A           10 MB      4.8 GT/s   130 W   $946

Interestingly, Intel says these processors share a number of platform features with upcoming eight-core Nehalem EX processors, including “the Intel® QuickPath Interconnect, the Intel Scalable Memory Interconnect, the Intel® 7500 Scalable Memory Buffer (to take advantage of industry standard DDR3 memory), and I/O hub (Intel® 7500 chipset).”

As we wrote last May, Nehalem-EX will also have a formidable transistor count (2.3 billion), which it will spend most notably on eight cores, 16 threads, 24MB of shared L3 cache, and a total of four memory channels. Server makers will be able to arrange Nehalem-EX CPUs in eight-socket configurations for a total of 64 cores and 128 threads. Unlike Tukwila, however, Nehalem-EX will be based on the same base microarchitecture as Intel’s desktop processors. Itaniums still have their own, separate instruction set, and Intel targets them at different markets.

A first-gen Atom chip resting on a Tukwila die. Source: Intel.

Servers featuring the new Itanium 9300 processors should start shipping within the next three months, according to Intel.

Comments closed
    • AMDisDEAD
    • 10 years ago

    LOL, Intel has completely trashed HyperTransport. No, I think it was actually AMD who trashed HT. LOL.

      • OneArmedScissor
      • 10 years ago

      LOL UR LATE LOL, LOL.

    • just brew it!
    • 10 years ago

    “I’m not dead yet!!!”

    </monty python>

    • danazar
    • 10 years ago

    Now Intel is getting very close to achieving what will finally give them entry to the market: The ability to build a dual-CPU machine with an x86 and an Itanium CPU in the same system and on the same interconnect. Legacy x86 programming lets people keep using their existing software from Day One while adopting Itanium-based software as it comes out. Once enough of the software is Itanium-based they can yank the x86 CPU and put in another Itanium CPU and increase performance further.

    This might finally give Intel enough leverage to start pushing Itanium out more widely. They need to put more Itanium chips out there so more people are willing to write more code for them. It could finally help solve the chicken-and-egg problem.

      • Scrotos
      • 10 years ago

      Buahahahahahaha!

      What’s more likely, that? Or will people instead leverage die-integrated GPUs for lightweight HPC-type computing? It’s a nice idea, don’t get me wrong, but I think that’s got no chance in the universe of coming to pass.

      • StashTheVampede
      • 10 years ago

      Itanium is nearly dead and buried. The 10 customers of Itanium may upgrade.

    • mrtintin
    • 10 years ago

    I agree with #1. POWER7 is much more interesting than this slow, late processor.

    More details here:
    http://www.theregister.co.uk/2010/02/08/ibm_power7_systems_launch/

    Also, performance on the industry-standard SPECint_rate and SPECfp_rate 2006 benchmarks is great, beating the highest-end Nehalem Xeons easily:

    8 core / 1 CPU 3.86 GHz POWER7 - 326/293
    4 core / 1 CPU 4.14 GHz POWER7 - 185/165
    4 core / 1 CPU 3.33 GHz Nehalem - 140/108

    No benchmark results have been released so far for the Itanium 9300 processor - I wonder why?

      • tygrus
      • 10 years ago

      My guess for SPECint_rate and SPECfp_rate 2006:
      8 core / 2 CPU 1.86 GHz Itanium 93xx – 105/130

    • Shining Arcanine
    • 10 years ago

    Are these drop in replacements for Xeons like Intel promised they would be?

    Also, does anyone know exactly what people run on these processors and how they perform in terms of gigaflops?

    Edit: Doing some math, these can do up to 6 * 1.83 * 4 = 43.92 gigaflops. Nehalem EX at 3GHz has 4 * 3 * 8 = 96 gigaflops. Can someone explain why anyone would be willing to buy these? It cannot possibly be for their performance, which is theoretically awful and almost certainly worse than the Nehalem-EX.
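For reference, the back-of-envelope math above is just FLOPs per cycle × clock × cores. A minimal sketch (the per-cycle figures are the commenter's assumptions, not official Intel specs):

```python
# Theoretical peak GFLOPS = FLOPs per cycle per core * clock (GHz) * cores.
# The per-cycle figures used below are the commenter's assumptions above,
# not official Intel specifications.
def peak_gflops(flops_per_cycle: float, ghz: float, cores: int) -> float:
    return flops_per_cycle * ghz * cores

itanium_9350 = peak_gflops(6, 1.83, 4)  # ~43.9 GFLOPS
nehalem_ex = peak_gflops(4, 3.0, 8)     # 96 GFLOPS
```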

      • Scrotos
      • 10 years ago

      All the flops in the world won’t save you (just ask Alpha!) if no one wants to port to your platform.

      In this case people who use Itanium have already spent millions if not more migrating all their HPC and mission-critical apps from PA-RISC, MIPS, Alpha, etc. over to Itanium over the span of a decade. Itanium scales to larger systems and has more reliability built in for big iron types of applications.

      So you’re asking people to once again move their highly-optimized code to another ISA, maybe another OS, and invest in a whole new ecosystem of hardware. Is that cost-effective?

      • just brew it!
      • 10 years ago

      Not drop-in (AFAIK); but they are supposed to share a lot of infrastructure (chipsets, etc.) with future Xeons.

      I’m not sure how much sense drop-in replacement would make anyway, since Itanium is completely incompatible with x86 at the software level.

    • ClickClick5
    • 10 years ago

    A 185 W TDP…..

    It is the P4 all over again!
    Lads, break out the liquid cooling again.

      • Meadows
      • 10 years ago

      What lads, nobody is using this.

        • Scrotos
        • 10 years ago

        I dunno, maybe he thinks these go in desktops or gaming rigs or something?

    • Mikael33
    • 10 years ago

    But how many FPS does it get in Crysis?

    • DrDillyBar
    • 10 years ago

    My Q9300 is jealous.

    • swaaye
    • 10 years ago

    Has anyone here personally used an Itanium box? What for? Was it thrilling? 😀

      • swaaye
      • 10 years ago

      bump 😀 !!!!

    • Ryu Connor
    • 10 years ago

    Even with the higher price of the Itanium chips, it’s my understanding that the volume of sales is still too small to amortize the cost of the process technology it is applied to.

    So the choice of 65nm for a niche product seems to make sense. I’d gather that the beancounters consider that x86 has moved enough volume to pay for the process. So now these high-cost, low-volume chips represent more gain than risk.

    Bob Colwell had mentioned that Merced maxed the reticle size. With the size of Tukwila, I wonder if they aren’t doing that yet again.

    • Goty
    • 10 years ago

    Meh. Very few people care about Itanium any more, even in the HPC space it’s supposed to serve.

    • BoBzeBuilder
    • 10 years ago

    Speaking of processors, where’s my mammoth CPU review Damage?

    • Krogoth
    • 10 years ago

    Itanium platform is at a dead end.

    These chips are only for existing Itanium customers. The cost of switching software platforms is still greater than hardware costs.

    Nehalem platform completely destroyed Itanium’s viability in server/workstation arena. The upcoming GPGPUs are going to kill Itanium in the number-crunching, big iron arena.

    • StashTheVampede
    • 10 years ago

    Itanium *was* Intel’s future for 64bit computing. It never took off without a flavor of consumer Windows and all the necessary apps to get it going. Sure, the x86 emulation was nice, but even Intel ditched it because no one used it.

    Intel went gung-ho on x86-64 once they smelled Itanium flopping.

      • d0g_p00p
      • 10 years ago

      I thought it was AMD’s 64-bit tech that threw a wrench (Hammer?) into Intel’s plans for the 64-bit realm with Itanium?

        • just brew it!
        • 10 years ago

        You’re both right. AMD gave people who actually needed 64-bit a reasonably priced alternative, and Intel had to follow suit and support AMD’s instruction set once x86-64 started to gain significant market share.

      • tfp
      • 10 years ago

      Yeah I’m sure consumers were going to shell out the 1-3.5K per Itanium CPU, not even counting the MB or anything else.

        • Anonymous Coward
        • 10 years ago

        Handily, there is nothing about IA64 that prevents it from being implemented in a cheap processor that works on cheap motherboards. One would expect such a product to fail on the market, of course.

      • djgandy
      • 10 years ago

      Yeah, because we all needed huge caches and parallelism in desktop PCs back in 2004, along with a completely new ISA with no mainstream software support.

      Itanium was never going to be a desktop product.

        • StashTheVampede
        • 10 years ago

        It wasn’t that Itanium was going to be a desktop product: it was Intel’s vision of what 64bit computing would be. Intel was touting Itanium as the “real” 64bit future and AMD was shipping their chips to desktops and servers.

        Intel was way ahead of its time with that vision. I’m also curious as to why these don’t fit into Xeon sockets (something also mentioned long ago, when Intel saw its Itanium orders slowing down).

          • Krogoth
          • 10 years ago

          IA-64 was an architecture designed in an era when big irons were still king of massive computing.

          Clustering and cheap-yet-powerful server/rendering boxes ate away at big irons. By the time the first generation of Itanium arrived in public channels, big irons were a vastly diminished niche. That niche continued to shrink as “mainstream” chips and clusters became more powerful and cost-effective for massive computing. Just take a look at the current TOP100 list: almost all of the entries are clusters of “mainstream and server-grade” chips.

      • just brew it!
      • 10 years ago

      IIRC the x86 emulation was actually too slow to be truly useful by the time it was introduced, since x86 performance had advanced by leaps and bounds during the endless Itanium delays.

      • jensend
      • 10 years ago

      The hardware x86 emulation engine was ditched because it was so slow that Intel realized they could do better in software: http://en.wikipedia.org/wiki/IA-32_Execution_Layer (software emulation of an ISA that uses just a few registers, on a chip with a whole bunch of them, is scads easier than the reverse; Itanium has tons and tons of registers).

        • Ryu Connor
        • 10 years ago

        The fact it took a fourth of the die on Merced was a catalyst for ditching the x86 hardware emulation as well.

    • flip-mode
    • 10 years ago

    And, remember, this is 2 billion (!) transistors at 65nm, while Intel’s desktop processors are on 45nm and now 32nm. So these chips are probably about as big as chips get. Still, Nvidia has done a chip with similar metrics and has been able to sell the chip and the card it sits on for only a few hundred dollars.

      • poulpy
      • 10 years ago

      You pretty much nailed it. Actually, don’t believe what they feed you: what you see in the picture is most likely one of the fab workers’ lighters! (!!!!)

    • jstern
    • 10 years ago

    In the interest of learning: why are these CPUs so expensive? Are they better than the i7s? I’m assuming that they are used for specific jobs.

      • Scrotos
      • 10 years ago

      Low volume = high price

      Plus they are marketed towards markets that both expect and support this type of pricing. Regular nvidia versus Quadro would be an example. You’re paying for the support and validation and guarantee of quality, basically.

      You also see that with the IBM POWER7/POWER6 versus, say, the PPC970 (Apple “G5”) which powered workstation-class machines. Different markets, different expectations, different abilities. Probably one of the Itanium cores can burn out and the entire thing will keep working, allowing you to hot-swap it in a running system with more than one socket. That’s the kind of stuff you pay for with this market.

        • jstern
        • 10 years ago

        Interesting.

      • just brew it!
      • 10 years ago

      They’re aimed at the market space between x86 and “big iron” mainframe gear. More emphasis on RAS (Reliability, Availability, Serviceability) features, and better scalability.

    • blastdoor
    • 10 years ago

    So is Intel under some contractual obligation to keep making Itanium? Does Itanium really have a future?

      • Scrotos
      • 10 years ago

      Let’s see, Intel killed off SGI’s use of MIPS and the whole PA-RISC line. Also Alpha died at their hands all in the name of Itanium being the future.

      SGI’s dead but I could see HP/Compaq being massively pissed that they transitioned to Itanium in time to let POWER get a great foothold in the big iron market. What if Itanium dies? What are they going to use then? Intel killed off all the other big iron competitors. Some MIPS variant? SPARC64? Their competitor, POWER? I dunno, that’d be another architecture shift. HP/Compaq customers are probably bitter about the last one–gotta wonder how many decided to say “screw it” and go over to IBM or Sun who both seemed to have a longer-term commitment to their ISA and better stability, product-wise.

      I could see there being some long-term contract made with Intel to keep things going, yeah. “We shoot ourselves in the foot for you, you better not stab us in the back!”

      • tfp
      • 10 years ago

      Really without some benchmarks vs Power 6/7 and the latest sparc it’s hard to say. If it’s close to power 7 and wins in one or two items it should have some life in it. I expect they will need to match with Power 6 at a minimum though.

      Either way it should be a good improvement over the last offering; time will tell if it sells.

      • johnrreagan
      • 10 years ago

      The biggest consumer of Itanium chips is certainly HP. HP-UX, NonStop NSK, OpenVMS, and Linux all have Itanium versions. There are Itanium blades all the way up to Itanium Superdomes.

      Itanium was a joint effort of the two (there are lots of PA-RISC similarities in the Itanium architecture).

      I have no idea on the business agreement between Intel and HP.

    • OneArmedScissor
    • 10 years ago

    I can’t believe they’re making new $4,000 65nm CPUs with such high TDPs in this day and age of “green” electronics, regardless of their application.

    But if you can afford that, you can afford the power bill, I guess.

      • tfp
      • 10 years ago

      If you look at power usage in this space you’ll find 180 to 200 W to be normal.

        • OneArmedScissor
        • 10 years ago

        Right, but it also used to be “normal” for single cores to have a 130w TDP. That doesn’t make it /[

          • tfp
          • 10 years ago

          That’s how it is. POWER6 and POWER7 run near or at 200 W. There will be low-power parts for blades at some point, along with market segmentation via lower core counts per chip.

          They are cramming a ton of power onto one chip, and these are for servers. The power savings come from going from 16 sockets to 8, thanks to performance improvements within the same power envelope. With desktops we can’t lower power usage by cutting sockets, so there the chips themselves have to lower power while increasing performance.

          If servers ever get to the point where they can do everything on one chip, they will start addressing power at the socket level too, while still increasing performance. Right now it just doesn’t make sense.

      • grantmeaname
      • 10 years ago

      if you look at wattage/performance, these are probably still way more efficient than Nehalem-EX

      • djgandy
      • 10 years ago

      What you going to do with it? Play Crysis?

      The power bill will be tiny when your computation takes only 14 days instead of 28.

      • MadManOriginal
      • 10 years ago

      It’s probably due to the crazy release delays for these chips. A story from 2003 that talks about the codename change says they were scheduled for a 2005 release. Intel showed a working Tukwila at Spring IDF 2008. Now I’m sure a lot more goes into these than desktop and regular server chips, but even a two-year timeframe from showing a working system is a bit much, let alone from the original development timeframe.

        • OneArmedScissor
        • 10 years ago

        Yes, people can say “that’s how it is” all they want, but the fact of the matter is, they’re releasing an old chip that isn’t quite up to modern standards.

        It’s not so much the TDP itself as the fact that even on the conservatively clocked dual-core version, with a fraction of the cache, it’s still extremely high. That raises a red flag to me.

        I’m just surprised they delayed it so long, but didn’t bother moving it to 45nm. The high-k process made a world of difference on everything else. The 32nm version will undoubtedly make this look very dumb. Coincidence? With Intel, yeah right.

        It was the same deal with Dunnington. It just did not make any sense as a 65nm CPU when they finally got around to releasing it. Then Nehalem came out and leveled it. “Wow, it’s a quad-core that beats a six-core in every way!” Gee, I wonder how Intel managed to make that happen? 😉

          • BiffStroganoffsky
          • 10 years ago

          These parts are destined for a market that has a low tolerance for faults and thus require a prolonged validation process for integrity. Consumers who power down each night or whose system spend 95% of its life in idle don’t have the same need as systems running at or near capacity 24/7 crunching numbers or some type of data. If they were to migrate to a new tech each time a new processing method became available, it would be an endless validation cycle with nary a product ever being released, a la Duke Nukem Forever!

            • OneArmedScissor
            • 10 years ago

            Surely, but the fact that they took so long, made other changes, but kept it 65nm, and then are skipping to 32nm, is highly suspect (if you ask me.)

            • SPOOFE
            • 10 years ago

            /[

            • blastdoor
            • 10 years ago

            Isn’t Power 7 targeting the same market, and isn’t it on 45 nm?

            All the excuses people are making for Intel are true up to a point, but they don’t go far enough to explain 65nm. I think the real explanation is that Itanium is an afterthought for Intel at this point whereas Power 7 is a much higher priority for IBM.

      • tfp
      • 10 years ago

      To me the funny part is that if they had been on 45nm or 32nm, they could have had either 8 cores and 48MB of cache, or 16 cores and 96MB of L3, at the same die size.

      The process tech is going to hurt them at a performance per socket standpoint until the next chip is out…
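The shrink math above follows from ideal area scaling, which goes as the square of the feature-size ratio; real shrinks fall short of this ideal, so treat the numbers as upper bounds. A rough sketch:

```python
# Ideal transistor-density gain from a process shrink scales as the
# square of the feature-size ratio. Real-world shrinks fall short of
# this, so these figures are upper bounds.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

gain_45 = ideal_density_gain(65, 45)  # ~2.1x: roughly double the budget
gain_32 = ideal_density_gain(65, 32)  # ~4.1x: roughly quadruple
```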

        • DaveJB
        • 10 years ago

        They wouldn’t have a prayer of producing this thing on the 32nm process; it just isn’t mature enough to churn out enough usable Tukwila dies to make a profit. Even the 45nm process would be chancing it, considering the challenges of producing such complex chips at such small gate sizes.

          • OneArmedScissor
          • 10 years ago

          Nehalem-EX ought to be equally complex, if not more so, but they seem to be getting that out the door pretty quickly.

          I’m sure that making a ginormous monolithic chip is quite a hurdle, but they’ve clearly got it down at this point.

            • jdaven
            • 10 years ago

            Actually, Nehalem is being released at a snail’s pace. Remember the first chips were released in November 2008. Then for all of 2009 they released no more than 10 different SKU’s in the desktop space. Only now, over a year later are they ramping up Nehalem production and increasing the number of SKU’s. Heck, we still only have one Nehalem based chip below $100.

            • OneArmedScissor
            • 10 years ago

            That’s because they have no reason to sell them for anything but the high end. They’re larger and more complex chips with no practical benefit over Core 2 to the average person. That’s not what I was talking about at all.

            I was referring to the fact that only about a year later than the initial release, they’re moving up to a very differently designed, and drastically more complicated, version of Nehalem, that’s an eight core, multi-socket, 2.3 billion transistor monstrosity…and still 45nm.

            Incidentally, those have pretty low TDP levels, actually lower than many 45nm quad-cores, but with caches equally ginormous as the new Itaniums.

            These new Itanium CPUs took years and years and years past when they were supposed to arrive, and they still appear a bit goofed up compared to what Intel are obviously capable of.

            • Shining Arcanine
            • 10 years ago

            Cache is just a form of memory, which is the simplest thing they can fabricate, so the majority of these chips is not hard to fabricate. The actual processing logic is the hard part.

            • stdRaichu
            • 10 years ago

            Remember that cache in CPUs is SRAM; it’s much faster than DRAM (especially WRT latency) at the expense of being much less dense: six transistors per bit, as opposed to one transistor and a capacitor. 24MB of SRAM is no mean feat.
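The six-transistor-cell point above makes for an easy back-of-envelope check on Tukwila's transistor budget (counting only the data arrays; tags, ECC bits, and peripheral logic would add more):

```python
# Transistor count for 24 MB of cache data arrays built from classic
# 6T SRAM cells. Tag arrays, ECC bits, and peripheral logic are not
# counted, so the real figure is higher.
BITS_PER_BYTE = 8
TRANSISTORS_PER_CELL = 6  # classic 6T SRAM cell

cache_bytes = 24 * 1024 * 1024
sram_transistors = cache_bytes * BITS_PER_BYTE * TRANSISTORS_PER_CELL
# ~1.2 billion transistors, i.e. over half of Tukwila's ~2B total
```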

            • DrDillyBar
            • 10 years ago

            How many make up Fermi? 3B?

            • cegras
            • 10 years ago

            Fermi is not as complex as the logic on the Nehalem chip. Furthermore, most of the chip is redundant; if you remember, it’s just a bunch of small cores put together.

            • jdaven
            • 10 years ago

            I still don’t believe this claim. Please provide a link that says GPUs are less complex than CPUs in general and that Fermi is less complex than Nehalem specifically.

            I think this is false conventional wisdom but would enjoy the opportunity to be proven otherwise but this proof must contain links and not just a general feeling about things.

            • just brew it!
            • 10 years ago

            GPUs are less complicated because you’re basically taking a functional block which is fundamentally a lot simpler than a general-purpose CPU core, and duplicating it dozens (or hundreds) of times. GPUs can do this because they can exploit the massive parallelism inherent in rasterized rendering.

            • DrDillyBar
            • 10 years ago

            I do agree, though my point is dead.
            But what about an SSD the size of a dinner plate then?
            (heat, yeah…)

    • pogsnet
    • 10 years ago
      • tfp
      • 10 years ago

      What’s the length of the POWER7 pipeline? Running at 4 GHz, I kind of wonder if it isn’t pretty long.

      Threading on chip “fixes” the problem of idle resources, whether caused by an overly wide processor or by a very long pipeline. I believe Itanium was very wide but not long-piped. POWER7 looks to have a longer pipeline just like POWER6, but I think they have increased the amount of computation resources per core, making it a much wider CPU than before. That would explain the lower clock speed vs. POWER6 while still being able to keep more threads busy.

      http://en.wikipedia.org/wiki/POWER6
      http://en.wikipedia.org/wiki/POWER7

        • Anonymous Coward
        • 10 years ago

        Over at Ars, they speculate that it is pretty closely related to the good old PPC970 and Power4. IBM may be able to hit their clock speed targets partly by the luxury of a huge TDP allowance.

        I would like to know more about why and how the Power7 might be related to the Power4.
