Intel to focus on ‘processor graphics’ for now

Last December, Intel put an anticlimactic end to the Larrabee hype machine, effectively canceling the discrete graphics processor and relegating it to a development platform. Stories posted since then have hinted that Intel will wow everyone with a discrete GPU when we least expect it, but the latest post on the company’s official blog suggests the contrary.

After opening with general statements about graphics, Intel Media Relations Director Bill Kircos gets to the meat of the subject:

We will not bring a discrete graphics product to market, at least in the short-term. As we said in December, we missed some key product milestones. Upon further assessment, and as mentioned above, we are focused on processor graphics, and we believe media/HD video and mobile computing are the most important areas to focus on moving forward.

Kircos elaborates, “Our top priority continues to be around delivering an outstanding processor that addresses every day, general purpose computer needs and provides leadership visual computing experiences via processor graphics.” Intel is increasing funding toward that goal, he says. (By “processor graphics,” Intel is of course talking about the integrated GPUs inside its dual-core, 32-nm processors for both desktops and laptops.)

At the same time, the company intends to continue working on research, development, and proofs of concept for “Intel architecture-based graphics” as well as high-performance-computing products. Speaking of the latter, Kircos’ post provides an update on the fate of Larrabee, mentioning a business opportunity and a “server product line expansion.” Intel plans to make more details public at the International Supercomputing Conference in Hamburg, Germany, next week.

Comments closed
    • Flying Fox
    • 9 years ago

    Intel must have run into some serious problems. It is not like they give up that easily…

    • Voldenuit
    • 9 years ago

    l[

    • deinabog
    • 9 years ago

    I’m not surprised by this. Intel’s first effort at discrete graphics (i740) didn’t do very well. Larrabee always sounded like a “me too” effort rather than an earnest attempt at a competitive product.

    At this point it would be better for Intel to buy either Nvidia (not likely) or the graphics division of AMD if they’re still determined to make a discrete part.

    • YellaChicken
    • 9 years ago

    Reply fail

    • Anonymous Coward
    • 9 years ago

    I’m really amazed to see Intel waving the white flag like this. It appears that Larrabee was just a big experiment that they are ready to walk away from. Seems like they are surrendering an important computing market to ATI and nVidia.

      • Hattig
      • 9 years ago

      I imagine they’ll still have computing products derived from Larrabee.

      Eventually they’ll get OpenCL and the ray tracing support perfected and fast enough (maybe on 32nm, or 22nm). They’ll also have “good enough” DirectX drivers.

      Then they’ll try again at discrete graphics, hoping that by then modern GPUs will have pushed some major engines to incorporate ray tracing on the GPU for some or all of the rendering process.

        • Anonymous Coward
        • 9 years ago

        Intel apparently has concluded that their x86-speaking complex core design will always use more power and die space than a purpose-made graphics core at the same performance level. It seems to me that their only hope is that graphics become so unimportant or stagnant that they can get away with something less efficient and still sell it for a profit.

        I wonder where this places Cell. Is everything between a CPU and a GPU busy dying out? Are CPUs simply destined to grow vector processing units?

          • bcronce
          • 9 years ago

          They still learned a lot with Larrabee. It gave them a glimpse of the issues of scaling multi-cores with the current x86 cache coherency protocols.

      • designerfx
      • 9 years ago

      Sure are.

      It’s short for “we’ll stick with our absolutely crappy integrated graphics because we didn’t put enough real effort into making something better.”

      Disappointing. It’s not like Intel would crush AMD or Nvidia, but having more competition is good.

      • HurgyMcGurgyGurg
      • 9 years ago

      Another thing to note is that Intel has basically developed its optical interconnect technology to the point of being close to market release because they thought an optical link would be needed to meet Larrabee’s bandwidth requirements.

      It’s no coincidence that Intel started showing off their optical link demos when Larrabee development was at its strongest.

      I talked with one of Intel’s representatives for Asian firms back when the failure of Larrabee was coming to light. Unsurprisingly, he spent half his time raving about this optical link technology instead.

      They will figure out a way to spin off the technology and recoup a significant amount of Larrabee’s cost.

      If not, this is Intel; they have the money to swallow a defeat, much as they did with Itanium.

      Poor AMD can never get to try grand experiments on its budget though. =/

    • BlackStar
    • 9 years ago

    Intel should just give up and spare us the pain. Just license parts from AMD or Nvidia, guys, and put them in your CPUs if you’d like. We’d have better performance, better video acceleration, improved stability, better support – not to mention more features (DX11, GL4, OpenCL).

    Intel’s driver situation is a nightmare both for developers and for users. I cringe every time someone asks me for help because game xyz doesn’t run or bluescreens his laptop.

    Just give up already.

      • Farting Bob
      • 9 years ago

      You clearly have no idea how processors get made if you think Intel could just “license” an Nvidia GPU and stick it on an Intel CPU package.
      Intel IGPs are fine for most things except gaming, and no IGP does that well unless you’re sticking to 8-year-old games.

        • BlackStar
        • 9 years ago

        Are you implying that Intel cannot license GPU technology from, say, AMD and spend 12 or 18 months integrating it into its next CPU refresh? Especially considering that AMD is already designing GPUs for CPU integration?

          • thesmileman
          • 9 years ago

          What he is saying is that Intel and AMD architectures are completely different and are designed to be manufactured differently. When you’re working in <40-nm space, your chip designs have to be built around how they are going to be made. It would likely take them longer to take a part from AMD and find a way to make it work with their current designs than to actually build a product of similar quality. Though I am hesitant to agree with someone named “Farting Bob,” he is correct.

            • BlackStar
            • 9 years ago

            That’s only necessary if the GPU is on the same die as the CPU cores, and it hardly needs to be: a first iteration could have the GPU “simply” sharing the CPU package, not unlike Intel’s first multi-core CPUs. This would be at least an order of magnitude simpler than “true” integration, while still resulting in a significant performance and capabilities win.

            In fact, *anything* would be a significant win over the current GPU situation. This may sound exaggerated but Intel *is* hampering progress. There is a reason why e.g. Apple is opting for hotter and more expensive Nvidia GPUs over Intel’s offerings, after all.

            • OneArmedScissor
            • 9 years ago

            “That’s only necessary if the GPU is on the same die as the CPU cores”

            Too late. That’s already years in the past of CPU design.

            • Bauxite
            • 9 years ago

            Too much quoting.

            Anyway, Intel already combines its own dies from different process nodes (32 nm + 45 nm) onto one package; it’s not that far-fetched to consider that they could bring in someone else’s design instead.

      • bcronce
      • 9 years ago

      GPUs are very fast at VERY VERY specific types of calculations. My video card is about 1,000x faster than my CPU when it comes to 3D graphics but only ~10x faster when it comes to distributed computing.

      Intel wants to combine lots of simple CPU cores with strong SIMD units to make a strong matrix cruncher. That means the flexibility of a CPU with some of the benefit of a GPU’s many simple cores.

      Intel is probably interested in this, too, since CPUs and GPUs do “multi-threading” completely differently. This research could help speed up future multi-core CPUs.

      Another benefit of the Larrabee idea is that it could actually work as a co-CPU, in that the OS could transparently off-load threads to it. Imagine your 20-tab Chrome instance running at lower power on your co-CPU, or your video en/decoder transparently adding more threads to speed things up.

      Yeah, it won’t be as fast for graphics, but it will be A LOT more flexible.

    • Hattig
    • 9 years ago

    1) Yes, either as two separate dies in one package, like Intel’s current solutions, or as a single die (Llano, some low-end Intel Atoms).

    2) It’s a GPU. Not a CPU. Not a CPU core. It does graphics.

    3) Not with Intel’s solution; NVIDIA had to come up with Optimus to deal with that. I’m sure that Llano will Crossfire with compatible devices.

    4) Yes. That’s what Optimus does.

    5) Llano will have that for Crossfire, but Intel’s integrated graphics work with no one else. Optimus uses the Intel graphics for display output whilst the work is done on the NVIDIA GPU, for high GPU workloads.

    6) Yes.

      • Voldenuit
      • 9 years ago

      Very clear and succinct answers to the questions.

      I’d just like to add that hybrid power schemes on DIY desktops probably won’t happen because of the level of driver and OEM support and certification needed to validate all possible hardware and OS combinations.

      ATI has a potential edge here with Llano, chipset, and discrete GPU if everything comes from the Red Team. Also, since ATI has much more mature OpenCL/DX Compute support, I think it’s much more likely that the auxiliary GPU on Llano will be less useless than an Intel IGP when you plug in a discrete graphics card.

        • aatu
        • 9 years ago

        I humbly thank you both, Hattig and Voldenuit, for the help. The answer sheet was especially good.

        And yes, I agree on the red team having an advantage on this one. I’ll have to remember this when making the next “big upgrade”.

        • YellaChicken
        • 9 years ago

        l[

    • aatu
    • 9 years ago

    Okay, I haven’t been paying enough attention to these “hybrid” processors, could somebody enlighten me a bit?

    1. So there’s GPU and CPU together in one processor package, cooled with one cooler, right?

    2. Then, how “general purpose” is the GPU part? As in: if I start using discrete graphics (PCI-E), will the GPU part in the package be used as a normal CPU core?

    3. If not, will it at least be “disabled”, so it won’t create more heat?

    4. Is it possible to use the Intel GPU for desktop work and only fire up the PCI-E card when gaming? Or will the PCI-E GPU be required to run all the time?

    5. Can they both (processor GPU and discrete GPU) be used at the same time, working together?

    6. Do both parties have their own versions of these hybrid processors?

      • ShadowTiger
      • 9 years ago

      oops your question was already answered… : D

      • Shining Arcanine
      • 9 years ago

      1. Yes.

      2. No.

      3. In theory it could be. In practice, it does not have to be. It depends on whether or not Intel’s power management mechanisms are designed for that.

      4. Nvidia’s Optimus does that.

      5. In theory, it is possible, but the memory bandwidth required to move stuff around such that computations can be shared would likely become a bottleneck, which would render the integrated GPU more trouble than it is worth. Just to give an educated guess on actual numbers, to have the integrated GPU render 5% of your screen, you will need to sacrifice 50% of your fps.

      6. If by both parties, you mean Intel and AMD, then yes. If by both parties, you mean ATI and Nvidia, no.

    • The Dark One
    • 9 years ago

    I know right now that Intel’s integrated graphics chips are still on their own die, but by the time they merge it all into a single chip, do you think it’ll be some sort of Larrabee derivative, with a cluster of stripped-down x86 cores sitting in with the more advanced core i-whatevers?

      • Ryszard
      • 9 years ago

      There’d be no real point in the smaller x86 cores at that point. They’re basically only there on LRB as the vector front ends (with the vector hardware the real value in the design IMHO), so if you’re taking the vector units onto a normal socketed i-whatever processor in the future, you’d get to drop the P54Cs and use the i-whatever’s front end instead.

      • OneArmedScissor
      • 9 years ago

      The GPU part of Sandy Bridge is not a shrink of their GMA or a Larrabee derivative. It’s something else entirely.

        • NeelyCam
        • 9 years ago

        How do you know this?

        • mczak
        • 9 years ago

        The Sandy Bridge GPU is not just a shrink of their current GMA, but it is certainly still based on the same architecture (which started with the i965). So by the time Intel integrates this on-die, it won’t be Larrabee-based, but that’s certainly not to say on-die GPUs can’t be Larrabee-based later.

      • FuturePastNow
      • 9 years ago

      I think Intel hoped that, if Larrabee were successful, they’d be able to break off 8-16 of its cores to make a new IGP.

      Now… who knows. If Larrabee really performed terribly, Intel will go to plan B.

    • alwayssts
    • 9 years ago

    The problem I see is that Intel is going to be one generation behind ATi/nVIDIA at the low end. Granted, that whole segment of discrete GPUs (64-bit) is going away with Llano and Clarkdale/SB, but even then AMD has a massive head start.

    Look at Clarkdale’s IGP. Now look at Llano (assuming 240 SPs, 24 TMUs, 4 ROPs). Do you think 2x is gonna do it for Sandy Bridge? I don’t.

    Okay, so then surely 22-nm Ivy Bridge, with perhaps 2x that (plus higher clock speeds), will? By then we’ll have a Northern Islands Fusion part, perhaps on 28 nm, say something like (wild speculation) 384 4D shaders with 24 TMUs and 8 ROPs.

    MAYBE by post-Ivy Bridge, maybe.

    If they don’t have the scalable architecture to compete against AMD at the high end in discrete GPUs, how, by the exact same reasoning, could they do it (any time soon) on the IGP side against the same company that is putting those GPUs on its competing CPUs?

      • Voldenuit
      • 9 years ago

      l[

        • Flying Fox
        • 9 years ago

        He was talking about the segment where Llano and Clarkdale are targeted. Bulldozer is already mid-to-high end.

        • sweatshopking
        • 9 years ago

        For how long, though? Eventually they will be fast enough, and discrete will be gone. It is inevitable.

          • Shining Arcanine
          • 9 years ago

          There are latency and bandwidth constraints imposed on any GPU that sits in a CPU package. Latencies are 50% higher because the memory is in DIMMs rather than soldered onto the motherboard, as Asus demonstrated with its line of boards with soldered-on memory. Bandwidth is also pathetically low.

          Unless those things change, GPUs in CPU packages will never be fast enough.

    • DancingWind
    • 9 years ago

    Intel:
    “… and provides leadership visual computing experiences via processor graphics.”
    LOL 🙂 And I imagine he said it with a completely straight face 🙂

      • derFunkenstein
      • 9 years ago

      I think he means market-size leadership, which they do have.
