Some speculation about that Larrabee die shot

We don’t yet know as much as we’d like to about Intel’s upcoming Larrabee GPU-CPU hybrid, but enough useful information has leaked out over the past little while to give us the ability to speculate a bit. Intel has disclosed many of the architecture fundamentals, but one of the big missing pieces of the puzzle has been the specific number of cores and other types of hardware that the first implementations will have. The release of a fuzzy die shot yesterday, therefore, caused a bit of a stir around here, with the TR editors sitting around peering at their monitors and exchanging puzzled IMs about what’s what.

Eventually, I started forming some theories, and after poking around online, I was pleased to see that some folks in the B3D discussion thread had similar ideas. We don’t really know much about the particular chip shown in the die shot, but given what we know about the architecture from Larry Seiler’s Siggraph paper and Michael Abrash’s overview of the instruction set, some possibilities become apparent.

If you look closely at this high-res version of the die shot, you’ll see that the chip is laid out in three rows. The design of the chip looks to be fairly modular, with repeating areas of uniform structures of several types. The most common unit of the chip is most likely the x86-compatible Larrabee shader core, and the dark areas at the ends of its long, rectangular shape are probably cache of some sort, either L1, L2, or both. We know that each core has L1 data and instruction caches, plus 256KB of L2 cache. By my count, there are a total of 32 cores on the chip—10 on the top row, 12 in the middle, and 10 in the bottom row.

Along with the cores are two other types of regular blocks on the chip. The larger of these two is a little narrower than a core and has a lot of dark area, which suggests cache or other storage. I count eight of those. There’s also one other block type, a narrow column, of which there are four total, two in the top row and two in the bottom. (After I had sorted all of this out myself, I saw this B3D post with an excellent visual aid. Worth a look if you can’t identify what’s what.)

My best guess is that the eight larger, dark-and-light blocks are texture sampling and filtering hardware. Larrabee doesn’t have as much dedicated hardware as most GPUs, but it does have that.

After spending some quality time with the color-coded RV770 die shot at the bottom of this page and noodling it around with David Kanter, who bears no responsibility for any of this mess, I’m betting the logic bits running along the upper and lower edges of the die, outside of the cores and such, are the memory pads. I see four repeating patterns there. Kanter notes that the four narrow columns on the interior of the chip are perpendicular to the memory pads. They are relatively evenly spaced, protrude from the edge of the chip into the center, and thus could be memory interfaces and other assorted logic that participates on the bus and talks to the I/O pads. So the magic number for memory interfaces would appear to be four.

David also suggests it might be fun to play "Where’s Waldo?" with the fuses, analog nest, and any other logic we’d expect to find in a GPU. We’re guessing the PCIe interface logic is along the right edge of the chip. Some other unidentified, non-repeating bits are on that side of the die.

I wasted some time trying to figure out the relationships between the cores and these other bits of hardware, but there don’t appear to be any clear groupings of blocks or physical alignments between cores and texture units. More than likely, each of these resources is just a client on Larrabee’s ring bus.

Happily, with no more information than that, we can tentatively pretend to start handicapping this chip’s possible graphics power. We know Larrabee cores have 16-wide vector processing units, so 32 of them would yield a total of 512 operations per clock. The RV770/790 has 160 five-wide execution units for 800 ops per clock, and the GT200/b has 240 scalar units, for 240 ops/clock. Of course, that’s not the whole story. The GT200/b is designed to run at higher clock frequencies than the RV770/790, and its scalar execution units should be more fully utilized, to name two of several considerations. Also, Larrabee cores are dual-issue capable, with a separate scalar execution unit.
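
For what it’s worth, here’s a quick back-of-the-envelope sketch, in Python, of how those ops-per-clock figures fall out if you take the unit counts quoted above at face value. The counts for the Larrabee chip are just this article’s guesses, not confirmed specs.

```python
# Peak vector ops per clock, using the unit counts quoted above.
# Illustrative only; real throughput depends on clocks, issue rules,
# and how well each architecture keeps its units busy.

chips = {
    "Larrabee (die shot)": 32 * 16,   # 32 cores x 16-wide vector unit (guess)
    "RV770/790":           160 * 5,   # 160 five-wide execution units
    "GT200/b":             240 * 1,   # 240 scalar stream processors
}

for name, ops_per_clock in chips.items():
    print(f"{name:20s} {ops_per_clock:4d} ops/clock")
```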

If I’m right about the identity of the texture and memory blocks, and if they are done in the usual way for today’s GPUs (quite an assumption, I admit), then this chip should have eight texture units capable of filtering four texels per clock, for a total of 32 tpc, along with four 64-bit memory interfaces. I’d assume we’re looking at GDDR5 memory, which would mean four transfers per clock over that 256-bit (aggregate) memory interface.
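
As a small sanity check on those guesses, the arithmetic works out like so. The unit counts below are nothing more than my speculative reading of the die shot.

```python
# Texture and memory geometry implied by the guesses above.

texture_units = 8             # the eight dark-and-light blocks, read as samplers
texels_per_unit_per_clock = 4
texels_per_clock = texture_units * texels_per_unit_per_clock   # 32 texels/clock

memory_interfaces = 4         # the four narrow interior columns
bits_per_interface = 64
bus_width_bits = memory_interfaces * bits_per_interface        # 256-bit aggregate

print(f"{texels_per_clock} texels/clock, {bus_width_bits}-bit memory bus")
```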

All of which brings us closer to some additional guessing about likely clock speeds. Today’s GPUs range from around 700 to 1500MHz, if you count GT200/b shader clocks. G92 shader clocks range up to nearly 1.9GHz. But Larrabee is expected to be produced on Intel’s 45nm fab process, which offers higher switching speeds than the usual 55/65nm TSMC process used by Nvidia and AMD. Penryn and Nehalem chips have made it to ~3.4GHz on Intel’s 45nm tech. At the other end of the spectrum, the low-power Atom tends to run comfortably at 1.6GHz. I’d expect Larrabee to fall somewhere in between.

Where, exactly? Tough to say. I’ve got to think we’re looking at somewhere between 1.5 and 2.5GHz. Assuming we were somehow magically right about everything, and counting on a MADD instruction to enable a peak of two FLOPS per clock, that would mean the Larrabee chip in this die shot could line up something like this: 

                 Peak pixel   Peak bilinear      Peak bilinear FP16   Peak memory   Peak shader arithmetic (GFLOPS)
                 fill rate    texel filtering    texel filtering      bandwidth     Single-issue   Dual-issue
                 (Gpixels/s)  rate (Gtexels/s)   rate (Gtexels/s)     (GB/s)
GeForce GTX 285  21.4         53.6               26.8                 166.4         744            1116
Radeon HD 4890   13.6         34.0               17.0                 124.8         1360
LRB die 1.5GHz   n/a          48.0               24.0                 128.0         1536           1620
LRB die 2.0GHz   n/a          64.0               32.0                 128.0         2048           2160
LRB die 2.5GHz   n/a          80.0               40.0                 128.0         2560           2700

In the numbers above, I’m betting that GDDR5 memory will make it up to 1GHz by the time this GPU is released, and I’m counting on Intel’s texture filtering logic to work at half the rate on FP16 texture formats. We can’t determine the pixel fill rate because Larrabee will use its x86 cores to do rasterization in software rather than dedicated hardware. I’m just working my way through Michael Abrash’s write-up of the default Larrabee rasterizer now, but I don’t think we can assume a certain rate per clock given how it all works.
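
If you want to check my math, here’s a rough sketch that reproduces the Larrabee rows of the table above from those stated assumptions: 32 cores with 16-wide vectors, a MADD counted as two FLOPS, 32 texels per clock (half rate for FP16), and 1GHz GDDR5 on a 256-bit bus. It doesn’t attempt the dual-issue column or the GeForce/Radeon figures.

```python
# Reproduce the speculative Larrabee rows of the table above.

CORES, VEC_WIDTH, FLOPS_PER_LANE = 32, 16, 2        # MADD counted as 2 FLOPS
TEXELS_PER_CLOCK = 32                               # 8 texture units x 4 texels
BUS_BYTES, TRANSFERS_PER_CLOCK, MEM_CLOCK_GHZ = 256 // 8, 4, 1.0   # GDDR5 guess

bandwidth_gbs = BUS_BYTES * TRANSFERS_PER_CLOCK * MEM_CLOCK_GHZ    # 128 GB/s

for core_clock_ghz in (1.5, 2.0, 2.5):
    texel_rate = TEXELS_PER_CLOCK * core_clock_ghz                 # Gtexels/s
    fp16_rate = texel_rate / 2                                     # half-rate FP16
    gflops = CORES * VEC_WIDTH * FLOPS_PER_LANE * core_clock_ghz   # single-issue
    print(f"{core_clock_ghz:.1f}GHz: {texel_rate:.1f} Gtexels/s, "
          f"{fp16_rate:.1f} FP16 Gtexels/s, {bandwidth_gbs:.0f} GB/s, "
          f"{gflops:.0f} GFLOPS (single-issue)")
```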

Obviously, clock speed makes a tremendous difference in this whole picture. Nonetheless, we’re looking at a potentially rather powerful graphics chip, at least in terms of raw, peak arithmetic. If the tile-based approach to rasterization is as fast and efficient as purported, then the relatively pedestrian memory bandwidth quoted above might not be as much of an obstacle as it would be for a conventional GPU, either.

That’s my first crack at this, anyhow. Would be cool if I turned out to be more right than wrong, but it’s all guesswork for now. At the very least, one can begin to see the potential for Larrabee to compete with today’s best DX10 GPUs. Whether or not it will be effective enough to contend with tomorrow’s DX11 parts, well, that’s another story.

Comments closed
    • stmok
    • 10 years ago

    So far, only Intel’s SIGGRAPH 2008 paper gives a ballpark idea of performance. With the cores at 1GHz and the requirement of maintaining 60 FPS at 1600×1200 resolution:

    Approximately…
    * 25 cores are required for Gears of War (no antialiasing).
    * 25 cores for F.E.A.R (4x antialiasing).
    * 10 cores for Half-Life 2: Episode 2 (4x antialiasing).

    So a 24-core retail version with cores doing 1.5GHz is capable of holding its own.

    No, it’s not going to blow either ATI or Nvidia out of the water or hold any top spots.

    I am expecting it’s going to sit somewhere in the mainstream market. (Faster than a GeForce 8600 GTS, more likely around a GeForce 8800 in performance.)

    It all boils down to developer tools and drivers. (No, the Intel IGP team is NOT the same team that is working on the Larrabee project.)

      • MadManOriginal
      • 10 years ago

      I hope you’re talking about the mainstream from a year and a half ago or more.

    • OneArmedScissor
    • 10 years ago

    The bottom line is that it’s not going to be much faster, if at all, than the next line of ATI and Nvidia cards it has to compete against.

    …but it could cost WAY more. They are very hush-hush about specs, but I keep seeing that the die size is speculated to be over 600 sq. mm, which is astronomical. The cut-down version will still be based on the full chips but will be much slower, so there’s no affordable/cost-effective version for them to break their way into the market with.

    This makes absolutely no sense to me. The high end has moved way down in price, period, but the real problem is that even the LOW END is becoming very comparable, and is more than good enough for most people.

    Intel could be introducing one of the largest (if not the very largest) and most expensive-to-manufacture chips, at a time when there will undoubtedly be $99 Radeon 4890 equivalents.

      • MadManOriginal
      • 10 years ago

      Yes, that’s true, and I’d thought something similar. The thing is, when the design process started, probably a few years ago, shooting for a $500 high-end part wasn’t unreasonable. Of course, these die size estimates are just guesses from a picture of a platter encased in lucite for display purposes, so…

      • shiznit
      • 10 years ago

      It makes no sense to you because you are not looking long term. Intel is not playing to win with Larrabee 1; they are playing to win at 32nm and 22nm and beyond. So while they might have really low margins on the first generation, they might have already factored that into their plans. Remember, Intel turns sand into chips; they don’t have to pay anything to anyone, and a 600mm^2 die might not cost them as much to make as what NV and ATI pay TSMC.

        • djgandy
        • 10 years ago

        And when a company is showing off a new product, they often show off the top-end, creme de la creme model. Now, that may turn out at 600mm2, but that is not outrageous when Nvidia’s GT200 was 576mm2. Let’s not forget a 32-core Larrabee is likely to do 2 TFLOPS.

        I’d expect to see higher-clocked 16/20/24-core models that will crank out the value. Something like 16 cores @ 2.5GHz = 1,280 GFLOPS, ~300mm2. That’ll be going up against the GPUs of today, so it will be a high/mid-range card by 2010.

        I don’t expect graphics to be profitable for Intel at first, as there is a huge price war in the market at the moment. AMD’s graphics division made $1M (or -$5M, depending on which numbers you use) and Nvidia made a fairly heavy loss. With three in the market, I can’t see that getting better for either of them.

        Then of course Intel’s 32nm will be late ’09 or early 2010, probably doubling the number of dies per wafer they’ll be able to get. I think that’s when we’ll see what Larrabee is really about, and some extremely cheap GPUs also, not that they aren’t right now.

          • OneArmedScissor
          • 10 years ago

          I am looking at it in the long term…but I’m also considering the present, which I don’t think you guys are.

          We already have single cards that do 1.6 teraflops.

          The Radeon 5000s were originally slated to launch in June, which they probably will not now, but nonetheless, single-GPU cards over 50% more powerful are what we’re already staring down.

          There are going to be cards that do over 2 teraflops well before Larrabee comes out. The 1.6 teraflop level of cards of today will be knocked down to the $100 range by this time.

          And this is still months ahead of the initial launch of Larrabee. When Larrabee does launch, all bets will be off, and ATI and Nvidia will probably go completely apeshit to steal their thunder.

          If the Radeon 5870 is a MIDRANGE part at LAUNCH, as is very likely, and there is a 5870 X2 and even a 5870 X4, as also has been speculated, and is completely probable, what on earth is going to be the point of even a significantly faster Larrabee?

          The 32nm version is expected to have 48 cores. That’s not going to save it, because it’s going to take EVEN LONGER to get there. It’s not like the 45nm one is going to come out, and then the 32nm one will be released at the same price, the next month, while ATI and Nvidia sit around powerless to do anything.

          Of course they show off the highest end one, and there will be cheaper, lower end derivatives. But if the cut down version of the highest end chip is 24 cores, as it’s supposed to be, then there goes ONE THIRD of its power. That suggests that anything else would start from half the power or less, and that simply isn’t going to compete with the type of cards that will be $100 by then.

          They can’t “play to win,” and they can’t compete with the low end, either. They’ll just be kind of sitting there in the middle of things, not making any sense.

            • SPOOFE
            • 10 years ago

            “If the Radeon 5870 is a MIDRANGE part at LAUNCH, as is very likely, and there is a 5870 X2 and even a 5870 X4, as also has been speculated, and is completely probable, what on earth is going to be the point of even a significantly faster Larrabee?”

            Yes, you’re right, assuming the absolutely best-case scenario for AMD and a worst-case for Intel, what is the point in them doing anything? They might as well close shop now, they have no hope of ever making a dent in the invincible, insanely wealthy, and never-faltering AMD.

            EDIT: Sorry, I should have USED more ALL-CAPS to show how RIGHT I AM.

            • OneArmedScissor
            • 10 years ago

            I’d shrink the text, too, but there’s no way to do that.

            I never said they should “close shop” and that “there’s no way they’ll ever make a dent.” Your words, not mine. That really has nothing at all to do with what I was talking about.

            Maybe saying “What is the point?” lends itself to being misunderstood, but what I am asking is, what is the point in arguing that Larrabee will get better as time goes on? You can figure out how much faster it’s going to get, just like with the rest of them.

            People like to argue based on numbers, as that’s all there is to go by, and the numbers don’t support anything in Intel’s favor. The fact that Larrabee can do over 2 teraflops doesn’t make it special, and the fact that it will be 50% more powerful once it’s 32nm doesn’t, either.

            The 5870 only has 50% more processors than the previous generation and it will be about 2.2 teraflops. It doesn’t have to be an expensive card, either.

            A 5870 X2 would be about 4.4 teraflops. If there is an X4 (highly possible with dual-core GPUs and considering the chip sizes), it would continue the trend and work out to about 8.8 teraflops lol…

            The GT300 is basically double a GT200, and will likely be even more powerful than a 5870, though more expensive.

            Again, this is the kind of thing that’s already next in line, while Larrabee is a ways off, and the possibility of improvements for it, which are even further away, still won’t be anything that can’t be replicated or outdone by others.

            There’s not particularly any “assuming the best case scenario for AMD” involved. Both AMD and Nvidia are going to be beyond the 2 teraflop point well in advance, and they’re not going to stop moving when they get there.

            So if Intel has a 3-4 teraflop single card at 32nm…so will everyone else, or better, at or about the same time. Intel doesn’t quite have the advantage there like they do over AMD’s CPUs.

            Larrabee may be powerful, but it’s going to have stiff and very cheap competition from both sides, which isn’t going to go away just because they move to 32nm.

            • zima
            • 10 years ago

            But I thought Larrabee was “a GPU from 2005”? Now suddenly most agree it’s comparable…

            • djgandy
            • 10 years ago

            X2 versions do not downgrade top-of-the-range single-card solutions to mid-range. With that argument, we can say that the i7 965 is mid-range, as four i7 965s are faster and more expensive, thus satisfying what seem to be your elitist requirements.

            Mid-range is subjective, but most people understand what it is. The 8800 GT was mid-range, the 9600 GT also; same goes for the 4850. High end is not what is theoretically possible.

            There are no 1.6TFLOP cards out today for single slot.

            • OneArmedScissor
            • 10 years ago

            It does at least in regards to how AMD is placing it in their product line. Rather than a 4850 at $200 and a 4870 at $300+, it will be a 5870 at $200 and a 5870 X2 at $300+, with a much more expensive X4 further down the line, as always.

            The X2s are still going to be single GPU, just dual-core, hence the distinct possibility of X4s. Is a dual-core CPU not a midrange part compared to the typical quad-core?

            The 1 GHz Radeon 4890s are 1.6 teraflops.

            • djgandy
            • 10 years ago

            Dual-core GPUs? You mean glued, then? GPUs are already “multi-core.”

      • shiznit
      • 10 years ago

      double post

    • bigbaddaddy
    • 10 years ago

    It is all about software. The GPU (red & green) guys haven’t been able to provide a consistent software development platform that stays stable for longer than two GPU cycles. You can get by with a few wrong pixels on screen, but when it comes to computation, long-term stability and a stable software development environment are the winning tickets. Can Intel deliver this? Unlikely, given Intel’s history in software.

      • tfp
      • 10 years ago

      History with software? There is a difference between something they don’t put a lot of effort into, like the current embedded graphics drivers, and software they do put a ton of effort into, the Intel compilers for example.

      Intel has pretty much the best x86 compiler and libraries around; there is no reason they can’t make good graphics drivers as well. It should be expected to have some amount of teething issues at the start. Even ATI and Nvidia, who have been doing graphics drivers for high-performance hardware for years, throw lemons out there every so often.

        • djgandy
        • 10 years ago

        Thank you for clearing that up for him. Intel is not just a chip company. They do so much with software that people do not realise. You can’t rate a company on its crap drivers for a crap graphics processor intended just to make a PC work. Aren’t most IGP problems with games / DX10? Is the IGP made for games? No.

    • thermistor
    • 10 years ago

    Post #44 and no one mentioned “Crysis.” I’m appalled.

      • indeego
      • 10 years ago

      Probably because we have to set standards for introducing the Crysis name-drop in articles.

      Nobody thinks Intel can actually make drivers from scratch that will get Crysis working. That would be the second coming of FSM <.<

      • MadManOriginal
      • 10 years ago

      But YOU just mentioned it. The irony!

    • Mat3
    • 10 years ago

    I highly doubt 2.5GHz is realistic anytime remotely soon. Reaching those speeds means longer pipelines, which means bigger cores. You can either have more cores, slower and smaller, or faster cores and fewer of them. Intel doesn’t have any magic formula.

      • Mat3
      • 10 years ago

      And even if the main cores are running at about 2GHz or more, that doesn’t mean the texture units will. In fact, they will certainly be clocked slower.

    • axioms
    • 10 years ago

    Setting game performance aside, I am really curious how this card will perform in the CG/content-creation department. If it does perform well, will it be a cheaper alternative to the FirePros and Quadros of the world?

    • Hattig
    • 10 years ago

    Larrabee is going to have some problems. Power consumption is one: 32 fast cores will eat a lot of power, and the transistors will be optimised for density, not power consumption like Atom’s. Also, the in-order design of the cores may limit clock speeds; around 1.5GHz seems likely. Secondly, the die size is rumoured to be around 600mm^2 (around 15mm^2 per core including cache, plus other logic). I suspect that this will be competing price-wise with Radeon X2s. The RV870 was rumoured to be over 2TFLOPS, so that’s 4TFLOPS in the X2 configuration.

    Then we get the driver worries.

    As for consoles, will Microsoft want to switch to Larrabee, or will they want to drop in an “R900” design (conveniently coming at the time that AMD will be doing their next major redesign after RV870, think about it) and a 12-core version of their current CPU running at 5GHz? Sony? Surely they’ll be PowerXCell 32 + GT300/400. Regardless, they’ll all excel at 1920×1080 gaming.

      • Anonymous Coward
      • 10 years ago

      Seems to me that for consoles, the obvious choice is something with a more favorable power+cost+performance arrangement than Larrabee can offer, regardless of pain to software developers.

      It will be very interesting to see where Cell ends up going. Surely the Cell alliance is not just sitting idle while Larrabee rolls them over. Can Cell offer a performance per watt or per dollar advantage over Larrabee?

      • tfp
      • 10 years ago

      /[

    • matnath1
    • 10 years ago

    Someone please explain the point of these chips to me. There is no way in hell they will ever come close to outperforming dedicated graphics boards. A pure processor, not “weighed down” with graphics compute responsibilities, should perform its tasks faster when teamed up with a pure graphics board, which will perform its task faster than… Hybrids are going to be slower at both, correct? Jack of all trades, master of none? We shall see.

      • just brew it!
      • 10 years ago

      CPUs and GPUs have been converging for several years now. All modern GPUs are capable of executing code, and CPUs have extensive SIMD support (in the form of SSEx and the upcoming AVX extensions). Taken in that light, Larrabee is not as far-fetched as it might seem at first glance.

      It is still a fairly big jump, but Intel has the best manufacturing process in the business. This could make up for the extra overhead of shoving a bunch of x86 cores onto the GPU.

      • djgandy
      • 10 years ago

      Ah right, because it is x86 compatible it’s not dedicated? You obviously missed the part about the 16-wide SIMD units. So what do you think Nvidia and ATI have doing their processing? Mysterious voodoo transistors? Nope, just boring old SIMD units.

      The scalar cores could even make this thing faster: you can load a small amount of code into them, and then they can perform logic operations, etc., without the need to return to the main system CPU. Anyone who has programmed with CUDA will know how awful scalar performance is on Nvidia GPUs.

    • kilkennycat
    • 10 years ago

    The graphics threat with Larrabee is Intel’s trojan horse, designed to confuse and frighten ATi and nV. Will be slow to develop because of the Intel shortage of graphics-driver expertise and near-zero cooperation with the major game developers. Remember that both ATi and nV drivers carry a long tail of legacy-game functionality. Would you be happy with a graphics peripheral that was only able to run current games without any artifacts? And with useless Intel promises on legacy support.

    The excuse that the driver writers for Larrabee “are quite different from those on the pathetic-functionality Intel IGPs” and thus will be wunderkids is a little far-fetched. Er, where did these wunderkids come from? Pirated from the top echelons of nV or ATI? Not likely without a bunch of legal obstacles.

    The real focus for Larrabee is its potential as the true next-gen Cell processor wherever the Cell is currently used and its consumer-product adoption in various cell-sizes (er, CPU-core-numbers) in all next generation consoles and a variety of other consumer products.

      • zima
      • 10 years ago

      Intel bought the Project Offset developers, so there might be a great-looking and great-running game ready for the Larrabee launch. Or even a great engine to license for almost nothing (not that it will be a significant direct source of money for Intel anyway).

      Also, don’t pretend ATI & Nvidia don’t have problems with legacy games – I seem to remember, for example, that there are huge graphical artifacts in the old Thief games and System Shock 2 due to new Nvidia chips/drivers poorly supporting 8-bit textures.

        • ish718
        • 10 years ago

        Intel’s acquisition of Offset software is mainly to showcase the power of Larrabee.

        Project Offset is going to “max out” Larrabee, telling from the visuals.

          • zima
          • 10 years ago

          Nothing wrong with “maxing out” per se; I thought it’s good to have your hardware fully utilised?

            • ish718
            • 10 years ago

            I never said it was bad. I’m hoping it actually does “max out” the high-end Larrabee.
            Project Offset will definitely turn out to be well optimized for Larrabee, though, since the developers will actually be working directly with the Larrabee team to maximize performance.

      • djgandy
      • 10 years ago

      Well, you don’t seem like one to comment, as you seem to lack an understanding of programming or of what a driver actually is.

      It would seem that you think there is only one solution to a problem and that Nvidia and ATI have solved that problem and no one else can solve that problem except them.

      A graphics card is merely another device. Considering Intel makes x86 CPUs, which have to deal with virtualisation, page tables, debug registers and all the rest, building a device that is far simpler, such that it does not have to deal with these things, should be a lot easier.

      Of course they have less expertise, but how hard is the concept of multiple SIMD cores?

      The main issues with a graphics card are:
      Ensuring the design is efficient, such that it uses as little die space per FLOP as possible
      Feeding the cores with enough memory bandwidth and data so that as near to peak performance as possible can be achieved
      Achieving an optimum equilibrium between cores/SIMD units/clock speed and all of the above

      It’s really not that difficult a concept. Nvidia doesn’t have to worry about branch prediction, cache coherency (except for their tiny const cache), and all the other problems involved with a CPU.

      Don’t get me wrong, I am not saying the task is simple, but when you have the best minds there are, it is not the challenge people make it out to be. People seem to hype graphics cards because they have high FLOPS ratings. There is a good reason for that: they are designed to.

      Now, drivers merely act as an abstraction layer between software and device. With a capable device, it should not be too difficult to implement the DX11 specifications and OGL. Let’s not forget, also, that a device with power is aimed at gamers; thus DX/OGL is an important factor.

      When you are selling a device of which 90% just sit running Vista and MS Word, the only thing that matters is that they do that task. Why would you spend a hell of a lot of money hacking together drivers that implement software-level fixes for a rubbish graphics card?

      At the end of the day, there is a good reason GMA makes up about 90% of all integrated graphics. It’s cheap. It didn’t get cheap by dedicating all the best engineers to writing DX10 drivers so idiots can attempt to play Crysis.

    • d0g_p00p
    • 10 years ago

    I hope Intel has a good driver team. This has always set them back. This was ATI’s problem back in the 8500 days. When the 9700Pro hit with great drivers, it started a trend with them. Awesome hardware and great drivers. Let’s hope Intel can do the same.

    • wibeasley
    • 10 years ago

    Can someone explain this sentence to me, “l[

    • zagortenay
    • 10 years ago

    They are taking huge risks. I hope they fail miserably and lose a lot of money. The ugly and dirty company Intel.

    • UberGerbil
    • 10 years ago

    This is nice work. There’s always something entertaining about devoting enormous intellectual resources to scant data to generate more insight than seems initially possible. Even if it turns out to be May Day Parade Review Stand Kremlinology, the exercise is often illuminating in itself.

    And your conclusions certainly sound plausible. I wonder how much downstream circuitry is on the chip itself (in those “non-repeating” bits), vs. what will be in discrete form on a finished add-in board or motherboard. QPI? DisplayPort support?

    I’d have to give this a lot more thought, but it may be possible to work backwards to figure out a rough range for the clockspeed — given your assumptions, what clockspeed is necessary to achieve the minimum commercially viable performance of a software renderer?

    • ClickClick5
    • 10 years ago

    Interesting. I do wonder these three things, though:
    1) What would the power draw be for this?
    2) Will Intel release regular graphics drivers for this? And would they be better built than their current integrated units?
    3) Will this take off? Will this fellow be the next big revolution for GPUs?

      • this_rock
      • 10 years ago

      You are right on the money with number 2. Sure, they can build the hardware, but graphics drivers are not something Intel is known for. They will need to produce on that front for Larrabee to have a chance.

        • ish718
        • 10 years ago

        The team that will develop drivers for Larrabee is not the same team that makes drivers for Intel’s IGPs…

          • ClickClick5
          • 10 years ago

          All I can say is they’d better not be. For this to live, well-working drivers out of the box will be required.

        • djgandy
        • 10 years ago

        DUH, because it’s a crappy graphics part.

        You have no idea how much R&D Intel does in the area of software. You do realise Windows is built on x86, right? Do you think it is by luck that the x86 architecture supports the instructions required to run an operating system?

        How about chipsets?
        Network cards?
        Raid controllers?

    • Meadows
    • 10 years ago

    Interesting research; fanboys everywhere are hoping it’s not true.

    It would make current generations look bad (remember, Intel was the guy who made all those crappy graphics decelerators), and on paper, it could stand its ground against the upcoming generation from AMD/Nvidia, too.

      • Kaleid
      • 10 years ago

      I wouldn’t mind more competition, but I’d rather it not be Intel, which is a company that is already too large.

        • SomeOtherGeek
        • 10 years ago

        Yea, but we need them or nothing would move forward… So, I’m glad they are spending the billions to make billions more on new research/technology.

          • poulpy
          • 10 years ago

          /[<"Yea, but we need them or nothing would move forward..."<]/ Errr the GPU industry has been moving forward quite steadily without any help from Intel so far (although one could see their integrated GPUs as an attempt to slow down progress I give you that).

            • Meadows
            • 10 years ago

            Sure, because nVidia and AMD helped design PCI-express.

            • poulpy
            • 10 years ago

            You know if you have nothing to add on topic it’s ok to give the Reply button a rest once in a while, nothing wrong with that.

            • Ikeoth
            • 10 years ago

            /[

            • poulpy
            • 10 years ago

            Gee don’t bite at his bait..

            The topic was the internals of GPU design (cf. the shot of the Larrabee die), and the OP said “Yea, but we need them or nothing would move forward,” to which I pointed out that, GPU design wise, Intel has never contributed anything.. Not having a go at Intel; it’s just that as far as GPU design goes, they’ve (so far) been non-existent. Now sure, as a side note, as masters of their platform and behemoths, they develop buses and slots that the industry uses, and it’s not Nvidia’s nor ATI’s place to do so; suggesting it is their place is just ludicrous. Hence “don’t bite at his bait..”

            • Meadows
            • 10 years ago

            You said Intel didn’t help the GPU industry “move forward”.
            Now if designing improved interconnects isn’t /[

            • travbrad
            • 10 years ago

            Real geeks solder their cards directly to the mobo 😉

            • poulpy
            • 10 years ago

            Glad you agree you don’t know what it is; we’re saving time here.
            When the topic is GPU design, they haven’t contributed anything. Period.
            They were the only player able to develop the buses on their own platform; saying ATI or Nvidia didn’t is stupid.
            Keep your personal attacks to yourself.

      • Ikeoth
      • 10 years ago

      I welcome the competition. Let there be cheap gfx horsepower for all.
