Core i7 beats Intel IGP in DirectX 10 software rasterizer

Microsoft is quietly developing a software rasterizer that allows x86 processors to render DirectX 10 graphics—and some early performance tests might surprise you. According to an article on MSDN, the Windows Advanced Rasterization Platform (WARP) will ship as part of Windows 7, and a beta version is already available in the November 2008 DirectX development toolkit.
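
For developers who want to opt into the rasterizer deliberately rather than wait for a fallback, the MSDN write-up describes WARP as just another Direct3D device type. The snippet below is a rough sketch of that idea rather than code from the article; it assumes the D3D10_DRIVER_TYPE_WARP value exposed in the Direct3D 10.1 headers of that SDK, and it omits real error handling.

    #include <d3d10_1.h>
    #pragma comment(lib, "d3d10_1.lib")

    // Prefer a hardware Direct3D 10.1 device; if none is available (no DX10
    // GPU, broken driver), fall back to the WARP software rasterizer.
    // Assumes the D3D10_DRIVER_TYPE_WARP value from the DirectX SDK headers.
    ID3D10Device1* CreateDeviceWithWarpFallback()
    {
        ID3D10Device1* device = nullptr;

        HRESULT hr = D3D10CreateDevice1(
            nullptr,                         // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,      // try the GPU first
            nullptr, 0,
            D3D10_FEATURE_LEVEL_10_1,
            D3D10_1_SDK_VERSION,
            &device);

        if (FAILED(hr))
        {
            hr = D3D10CreateDevice1(
                nullptr,
                D3D10_DRIVER_TYPE_WARP,      // software rasterizer on the CPU
                nullptr, 0,
                D3D10_FEATURE_LEVEL_10_1,
                D3D10_1_SDK_VERSION,
                &device);
        }

        return SUCCEEDED(hr) ? device : nullptr;
    }

The appeal is that nothing else changes: the same shaders and draw calls run whether the device lands on the GPU or on the CPU, which is why applications written "without any knowledge" of WARP can reportedly run on it unmodified.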

WARP will come in handy in a number of cases, Microsoft writes, like when a user’s graphics card is broken or the graphics driver malfunctions. The company is also targeting casual gamers who want prettier graphics than what their GPU (or integrated graphics hardware) might be able to deliver:

Games have simple rendering requirements but also want the ability to use impressive visual effects that can be hardware accelerated. The majority of the best selling game titles for Windows are either simulations or casual games, neither of which requires high performance graphics, but both styles of games greatly benefit from modern shader based graphics and the ability to scale on hardware if present.

Best of all, DirectX 10 applications designed “without any knowledge” of WARP will reportedly run well using the rasterizer. How well? According to Microsoft’s performance numbers, one of Intel’s new Core i7-965 processors can run Crysis at an average of 7.4 FPS at 800×600 with the detail level turned down. In the same test, Microsoft says Intel’s DX10-capable integrated graphics hardware managed just 5.2 FPS. A GeForce 8400 GS (which is available for $30 these days) outdid both solutions, though, scoring 33.9 FPS.

Microsoft’s work is particularly interesting in light of Larrabee, Intel’s future graphics processor that will contain x86-derived processing cores. As an unrelated side note, Epic Games CEO Tim Sweeney commented earlier this year that a hypothetical resurgence of software rendering could counteract the prevalence of Intel integrated graphics that are “incapable of running any modern games.” (Thanks to Custom PC for the link.)

Comments closed
    • Usacomp2k3
    • 12 years ago

    I just thought of this. I wonder if this will help with GPU-accelerated programs such as CAD. Since the CPU can do double-precision, maybe this will lessen the need for the pro-level video cards.

    • dustyjamessutton
    • 12 years ago

    hmmm, so the GeForce 8 costs less than an Intel Core i7 yet is still faster? If I am correct, wouldn't it make much more sense to switch to a new processor architecture similar to the ones GPUs use and ditch x86? Wait, no, that wouldn't be good. HEY MARTY? YES BOB? HOW LONG TILL THE PROGRAM IS DONE RECOMPILING TO WORK ON THAT NEW ARCHITECTURE? I DON'T KNOW BOB. MICROSOFT IS STILL WORKING ON RECOMPILING WINDOWS, SO I DON'T THINK THE BOSS WILL CARE IF IT TAKES MUCH LONGER!! OK MARTY, I'M GOING ON A COFFEE BREAK. HEY BOB, GET ME SOME DONUTS WHILE YOU'RE OUT!! So yes, everything would need to be recompiled for a new architecture. But still, it's a cool idea to think we have the technology to make ultra-fast processors. Dang, x86 is gettin' old.

      • Scrotos
      • 12 years ago

      Um, kinda? Not really? No? There’s been special-purpose hardware that accelerates specific types of workloads forever. Remember when playing a DVD on your 486 required a dedicated DVD decryption add-on card? Maybe not. Well, people weren’t throwing their arms in the air and waiting for a DVD decryption chip to become their new CPU.

      Same thing with GPUs. They are nice and fast on certain workloads, but can’t do general purpose (even though the marketers call them GPGPUs, tricky!) computing as well as CPUs actually designed for that purpose. You could conceptually think of the GPU as a really beefy MMX or SSE or 3DNow! unit on a CPU. Those special instructions really sped up very specific workloads, but weren’t much good on general purpose programs.

      Same thing with your GeForce8 example. Now, if someone could port, say, Linux or BSD to a GPU, that’d be both rad and totally prove me wrong, if it were a functional port rather than a proof of concept that didn’t actually do anything.

    • Jigar
    • 12 years ago

    All I can see here is that Larrabee is coming, and Microsoft is opening the doors for it…

      • Meadows
      • 12 years ago

      They don’t want to get on the bad side of AMD or nVidia either, hence: compute shaders.

    • BoBzeBuilder
    • 12 years ago

    Hello.

    • Usacomp2k3
    • 12 years ago

    I think the biggest story here is that for general computing use (office, browsing, nettops), there may not be a need for a GPU at all. If that brings down prices or decreases time between iterations, then I'm all for it. Simplicity is always a good thing.

    • pogsnet
    • 12 years ago
    • noko
    • 12 years ago

    Wonder how this will compare to AMD Fusion? Really sounds like AMD is more on the right track.

    • berrydelight
    • 12 years ago

    If ATI and NVIDIA were so great at power consumption, then why do Intel IGP get 2 hours more battery life on a laptop?

      • Nitrodist
      • 12 years ago

      What does that have to do with anything?

        • Flying Fox
        • 12 years ago

        They were arguing about the various IGPs and power consumption somewhere here in the comments.

      • jdaven
      • 12 years ago

      Please provide a link that proves that Intel’s IGP gives longer battery life. I don’t believe this to be true but I’ll be first to say I’m wrong.

      I think that in general productivity benchmarks yield the same battery life for all IGP chipsets simply because they don’t really use the GPU. Nvidia and ATI chipsets may not last as long for 3D games but then that’s not the point when playing games.

        • accord1999
        • 12 years ago

        Quoting #60: "Please provide a link that proves that Intel's IGP gives longer battery life. I don't believe this to be true but I'll be first to say I'm wrong."

        The new ThinkPad T400 with an LED-backlit screen gets almost 10 hours of battery life on an 84 WHr battery, or around 8.5 W of power usage in low-CPU situations when using the integrated graphics. Under high CPU and GPU load (using the included 3470 graphics card) it lasts around 3 hours, for under 30 W of power consumption. With the 53 WHr battery, it lasts around 6 hours in typical usage.

        http://www.notebookreview.com/default.asp?newsID=4569

        A reasonably comparable HP LED-backlit notebook with an Nvidia 9200-based chipset lasts around 3.5 hours on a 55 WHr battery in typical usage.

        http://www.notebookreview.com/default.asp?newsID=4652

        Sure, the Intel IGP is lousy, but the mobile chipsets that use it have unmatched power consumption.

    • WaltC
    • 12 years ago

    Quoting the article: "According to Microsoft's performance numbers, one of Intel's new Core i7-965 processors can run Crysis at an average of 7.4 FPS at 800x600 with the detail level turned down. In the same test, Microsoft says Intel's DX10-capable integrated graphics hardware managed just 5.2 FPS. A GeForce 8400 GS (which is available for $30 these days) outdid both solutions, though, scoring 33.9 FPS."

    I see nothing impressive here at all. 8 frames per second is hardly "playable"--it's pretty much a slide show. Let's not even mention that it's at 800x600 with detail turned "down." Nobody who buys a game like Crysis will be remotely interested in doing this. This kind of publicity reminds me of the early pre-V1 days of 3D, when 5-10 FPS was considered by some to be really pushing it--"exciting," even. We seem on some levels to be constantly looking backwards to the "good old days" of software rendering, which really weren't "good" at all. Decent 3D GPUs set us free of all of that.

    I'm glad to see Microsoft working again on software rendering for 3D, however. Back in the days of software renderers I used to enjoy them because they illustrated so profoundly how much better 3D GPUs were for 3D rendering. I expect the future of software rendering will be much more of the same. Software renderers like this are really going to accelerate the sale of 3D GPUs, so I'm all for them.

      • bjm
      • 12 years ago


        • WaltC
        • 12 years ago

        I really don't understand your point–software renderers have *always* shown this kind of lackluster performance when compared to GPUs–always. There's nothing new here–at all. Software rendering capability has *always* existed from the very first days of 3D–it is neither new nor imaginative. I don't see software rendering, Larrabee or no, ever eclipsing hardware rendering.

        I suppose I could “imagine” that one day it might be different–but what would be the point?…;)

          • bjm
          • 12 years ago

          I don't disagree with you. All things being equal, specialized hardware will never be surpassed by a software rendering solution on a more generalized processor. Hardware RAID and discrete sound cards always show that specialized hardware yields a distinct advantage over more software-based solutions. Even external modems vs. 'winmodems'. However, that never discredited the viability of their software-based counterparts.

          Now granted, given the ability of developers to continually push the limits of modern-day graphics cards regardless of the great strides ATI and nVidia make, a general x86 CPU with a software renderer being 'fast enough' for games is still a long way off. And that is especially true on the desktop.

          However, imagine for a moment a cell phone with a 100-core processor that would handle sound, communications, graphics, and every other processing need. Imagine further that the graphics, sound, and input/output APIs used in that cell phone are the same as on the desktop. With a small screen, a software rendering solution may be preferable should it perform well enough, allowing MS to leverage DirectX on a mobile platform.

          But hey, just a thought…

            • Anonymous Coward
            • 12 years ago

            I say a cell phone is a bad example. That is exactly the environment most likely to maintain dedicated hardware because it is most power efficient.

            Hardware always offers either greater performance or lower power consumption than software which performs the same task. The only way that software can have an advantage is if it is doing something that hardware cannot, or if the hardware can be omitted and the cost saved. (Given the number of CPU cores needed to match hardware on performance, we are probably not looking at a cost savings.)

            I doubt that there is any need for hardware sound these days, and the same goes for RAID. As far as I know, all the “hardware” RAID is really just a dedicated general purpose CPU doing its own software RAID anyway. Linux softraid is wonderfully flexible and a joy to use compared to crap like 3ware’s products (which, as I understand, are the best on the market). It seems to me that softraid throughput was always limited by the quality of the SATA controllers, even on P3 servers. (That said, an external $10k 4Gb FC RAID that I used was very impressive while being brain-dead easy to set up and use.)

            I wonder what high speed SSD will do for RAIDs. A lot of new throughput bottlenecks will emerge, for sure.

        • Flying Fox
        • 12 years ago

        I just don't see the Larrabee stuff yet.

        What I am seeing is, as in the link in the forum thread, a way to avoid the Vista Capable debacle again.

        Plus, if it is at all possible to use the software rasterizer instead, we would be able to get the "reference rendered image" much more easily than before (AFAIK you need to have the DirectX SDK in order to get it). So people may have an easier time checking for optimization cheats and the like?

      • MadManOriginal
      • 12 years ago

      Demos like this are fights between various hardware and software vendors for control of future standards and the direction of development. They're not meant to show that software rendering is comparable in performance to hardware rendering.

    • Envy007
    • 12 years ago

    For a chart of other processors and the 2400 Pro see

    http://tweakers.net/nieuws/57048/windows-7-krijgt-softwarematige-directx-10-ondersteuning.html

    It's in Dutch, but the chart is self-explanatory.

    • Chillectric
    • 12 years ago

    I think some of you underestimate the power of Intel with Larrabee.

    They have already demonstrated they could make a chip with 1.01 teraFLOPS in just a 62W power envelope.

    They have also bought Havok and Project Offset.

      • Flying Fox
      • 12 years ago

      AMD has been claiming big TeraFLOPs in their shaders too, but for a long time (up until recently) they have not been able to translate that to actual game performance.

      So that number is as meaningless as GHz in the P4 days.

    • Chillectric
    • 12 years ago

    I think Intel has bigger plans. I mean their chipsets (without IGP), processors, and now SSDs are pretty much king of the hill.

    Intel spends a lot more time working with developers to see what is needed in their next instruction sets than it does worrying about graphics.

    This software rasterizer is really a great thing and will work well with Larrabee along with Intel's AVX ISA. No more buying new graphics cards to be "DX11+" compliant; you can just update the software.

      • Meadows
      • 12 years ago


    • pogsnet
    • 12 years ago
    • Krogoth
    • 12 years ago

    $1,000+ CPU beats a $29 IGP at gaming performance? Okay.

    A cheap Athlon X2/Core 2-based Celeron + HD 4670/ Geforce 8500 combo blows both away for under $200.

      • Mourmain
      • 12 years ago

      Not even that: the i7 beat the *integrated* GPU, but the $30 external GPU beat the i7 four times over!

      This just shows how awful integrated GPUs are.

        • Usacomp2k3
        • 12 years ago

        …for gaming. They're fine for other things. Heck, at the last LAN party I played on my laptop with the GMA950.

    • PRIME1
    • 12 years ago

    TRS-80 beats Intel IGP in gaming….

      • swaaye
      • 12 years ago

      I’m convinced the TI-85 does too. Yup, some of those raytracer Doom clones were mighty impressive…..

      • Krogoth
      • 12 years ago

      Dammit meant to be a stand alone post.

    • kilkennycat
    • 12 years ago


    • albundy
    • 12 years ago

    Intel's answer to CUDA and Stream? It's got a really, really long way to go. I'd say in the form of 40 cores or more on a die. But there's a better chance of old man Governator acting in the next sequel…

    • Hattig
    • 12 years ago

    1/5th the speed of an 8400GS … using 4 cores at 3GHz and several dozen watts and a few hundred dollars.

    I think it shows (a) software rasterisation is not a viable option, and (b) Intel should be ashamed of their integrated graphics.

      • eitje
      • 12 years ago

      well, it *IS* after all still in beta. 😉

      • lycium
      • 12 years ago

      exactly correct; much as i love my i7, that’s not how it should be used 😛

    • barich
    • 12 years ago

    I wonder if this will mean that Aero can function without a DirectX 9-compliant graphics card. Vista and Windows 7 both run fine on my laptop, but it only has 855GM graphics, so I don't get Aero.

      • Flying Fox
      • 12 years ago

      That’s kind of what they meant to do, but we won’t know for sure if they still check for a “compliant” GPU in Win7.

      • ludi
      • 12 years ago

      I think that's kind of the point, actually. If Microsoft can have basic DX10 effects running on any middle-to-high-end CPU that will be available by the time Windows 7 rolls out, then the interface does not need a 2D-only option — the interface uses a DX10-compliant GPU if available, and defaults to software emulation otherwise.

    • Meadows
    • 12 years ago

    I'm not surprised; my grandmother could calculate graphics with an abacus faster than an Intel graphics chip does.

      • echo_seven
      • 12 years ago

      A hand-knitted sweater…..of the windows desktop……lol that would be an awesome Christmas gift.

    • DrDillyBar
    • 12 years ago

    *hugs Quadcore*

    • Ihmemies
    • 12 years ago

    I can't see how Intel's IGP is a fail if you want a low-power solution for your mobile office work. My FPS in Office rises by 0.0% even if I have the best laptop gfx card available.

      • OneArmedScissor
      • 12 years ago

      Because everyone else’s is lower power and higher performance if you need it. Intel’s all-in-one chipsets in general are never anything special.

        • Voldenuit
        • 12 years ago

        They’re not even “All-in-one”, as they need multiple chips (NB + SB) to accomplish what nvidia can do with 1 (9400M) – even with X58.

        LGA1156 will change this, as the PCIE and memory duties will get integrated into the CPU, but it’s too long a wait for Apple, hp, Dell, and everyone else who hopped on the 9400M wagon (and good for them).

        We all see what happens when intel gets complacent (pentium IV, low-power core2quads etc). Old tech at inflated prices. Hell, the ICH10R hasn’t seen any major improvements since ICH7, and the X58 is built on 130nm process!

        Intel the slumbering CPU giant was woken up when AMD challenged them with Athlon XP and 64, but it looks like their foot (chipset and IGP) is still asleep.

          • Chillectric
          • 12 years ago

          X58 is built on a 65nm process despite what Wikipedia and TR's own article say.

      • jdaven
      • 12 years ago

      I would like to see some evidence (a link would be nice) that Intel's IGP has better power savings than AMD's and Nvidia's IGPs. Don't get me wrong. I'm not refuting what you are saying. I too just think that it should be lower power, but when I think about it, I have absolutely no proof to back that assertion up. I mean, all IGPs use silicon built into the northbridge. They all have power saving measures. And they all share bandwidth with main system memory.

      The only difference might be when employing the 3D aspects of the IGP, but if you are doing just office work, shouldn't all IGP chipsets give the same power savings, not just Intel's?

        • OneArmedScissor
        • 12 years ago

        No. For one thing, Intel tends to use older, and sometimes MUCH older, processes for manufacturing the chips for their motherboards, as they do it themselves (far as I know). It takes them over a year to phase out an old manufacturing process for CPUs, and once that’s done, they apparently just switch it over to chips for their motherboards.

        For example, the P45 was a shrink to only 65nm, and that's their second most recent, non-low-end board, after they had already been manufacturing 45nm CPUs for almost 9 months. Models that are actually old, like their very cheap boards, may very well be using 130nm.

        I don’t know exactly what uses what, but that kind of thing is exactly why the northbridge chip on the Atom boards uses SEVERAL TIMES more power than the CPU itself. And that’s for a platform that focuses on nothing but power efficiency.

        AMD uses 55nm on their more recent boards. It definitely helps them. Nvidia may very well do the same on their newest models.

        Intel’s primary business is CPUs, and focusing on making them as up to date and efficient as possible is much more marketable and therefore profitable to them.

        EDIT: Refer to Voldenuit’s post. I made a booboo, but that’s pretty much all you need to know.

          • Voldenuit
          • 12 years ago

          X58 is 130 nm.

            • OneArmedScissor
            • 12 years ago

            LOL so it is. I guess I confused it with the fact that the processors are 45nm only for the moment.

            So there you go. That explains everything.

            • fantastic
            • 12 years ago

            That is just sad. No wonder.

    • jdaven
    • 12 years ago

    When your GPU is beaten by your CPU, you know it's time to rethink your strategy, or at the very least hope your customers remain very stupid.

    Anyone buying any computer (laptop, desktop or otherwise) with an Intel IGP should not be allowed a credit card.

    Epic fail!

      • wingless
      • 12 years ago

      +1

      What a pathetic IGP. They should beg AMD to support their CPUs and make good IGP chipsets.

        • pogsnet
        • 12 years ago
          • Scrotos
          • 12 years ago

          Your post is vaguely erotic.

      • jdaven
      • 12 years ago

      Beaten in games I meant to say.

      • Kurotetsu
      • 12 years ago

      Intel IGPs were never meant for gaming or anything graphics intensive anyway. They work fine for anything other than that (browsing, office suites, chatting, etc etc).

        • jdaven
        • 12 years ago

          Then why don't Nvidia and AMD know that? I mean, I know they must be stupid or something since they don't bow to God's gift to all mankind, Intel Corp., but I guess they just keep to their stubborn ways of making IGPs that blow Intel out of the water in games, video encoding and playback, etc.

          Why can't Intel make a budget IGP like the current fastest one they've got, a mainstream part to compete with AMD/Nvidia's mainstream offerings, and a performance IGP for the higher end? Are their engineers stupid? Do they think they're not necessary? Or, the most likely scenario, do they not care since most of their customers don't know the difference between a GPU and a floppy disk?

          I'm sorry to be so pissed off about this, but Nvidia's CEO is right: Intel has held back graphics development on the PC for years with its IGPs and the low IQ of its customers when it comes to buying a PC.

          • DrDillyBar
          • 12 years ago

          This being a major reason nVidia exists….

          • bjm
          • 12 years ago


            • Bombadil
            • 12 years ago

            Intel’s IGPs are horrible from a power standpoint. 9W idle for the G45 compared to <1W for a 780G.

            • jdaven
            • 12 years ago

            I was going to say something about this but I didn’t have the numbers. Thank you Bombadil. I think this is why Intel doesn’t talk much about power efficiency on their video graphics hardware website. Other integrated solutions have the same efficiency but also better performance versus Intel’s IGP. The only difference is absolute cost. Laptops and computers with Intel IGP are generally cheaper but from a performance/dollar perspective, you are getting ripped off.

            Why am I having to defend this position anyway?! Almost every single hardware review site in the land criticizes Intel for their inferior IGPs when the company is capable of so much more (Core 2, Core i7, etc.). Intel knows it too, but rather than change, it would rather convince companies like MS to cover up the craptacularness of its IGPs (search for the Vista Capable lawsuits to understand why).

            • jdaven
            • 12 years ago

            I was responding to another post about IGP’s in general not being for gaming and other graphics applications and the fact that AMD and NVIDIA solutions are.

            As for the rest of your comments, please see my post from #15 with links directly to Intel’s website stating how their IGP’s are for gaming and other graphics applications and that their solution is better than standalone video cards. Here is the link again in case you need it:

            http://www.intel.com/products/graphics/index.htm?iid=prod+prod_ig

            Nowhere at this website does it talk about power efficiency, but rather the value proposition of being good enough versus buying a separate graphics card. Again, just because you know something about a product doesn't mean the manufacturer advertises it that way.

            • Kurotetsu
            • 12 years ago

            Ummm, what the hell are you talking about?

            Here’s what I said:

            >>Intel IGPs were never meant for gaming or anything graphics intensive >>anyway.

            Here’s what you said:

            >>I was responding to another post about IGP’s in general not being for >>gaming and other graphics applications

            I MUST be mistaken here and you’re not actually responding to my post. Otherwise, I have NO IDEA how the hell you got that from my post.

            • jdaven
            • 12 years ago

            Sorry my mistake.

            However, you must admit Intel does advertise their IGPs as good for gaming and other video applications, as per their own website.

            http://www.intel.com/products/graphics/index.htm?iid=prod+prod_ig

            Nowhere does it say that they are only good for internet and general office productivity.

            • DrDillyBar
            • 12 years ago

            As I read it, it sounds more like they are simply saying their chips support SM3. It's SM3 that enables all the whizzbang awesomeness you're getting so worked up about. When you factor in that IGPs in the past didn't focus beyond basic 2D functionality, SM3 IS a nice addition to their products. Nowhere do they claim that you'll get 30 FPS in Quake. Something like Civilization or other turn-based strategy games would work fine, and look nice.

          • d0g_p00p
          • 12 years ago

          Yet Intel is still the market leader (in terms of sales) of video cards… errr, IGPs. I think they will try with Larrabee to break into the high-end graphics market. I agree with nVidia, though, and I think Larrabee will end up being Laughabee. Intel has never made a decent video solution.

        • jdaven
        • 12 years ago

        Okay, straight from the horse's mouth…

        http://www.intel.com/products/graphics/index.htm?iid=prod+prod_ig

        "Get stunning visual results from your mobile or desktop PC with Intel Graphics, including the full Microsoft Windows Vista* experience, beautiful video playback, and excellent mainstream gaming performance."

        Intel then breaks it down into more detailed descriptions using the following headlines:

        Invigorate your video
        Energize your gaming sessions
        Get more for your money

        They even give a game compatibility list:

        http://www.intel.com/support/graphics/sb/cs-012643.htm

        So don't tell me these IGPs were not meant for gaming and other video capabilities. Just because you know something to be true about a product doesn't mean the manufacturer won't try to use marketing to convince the average consumer otherwise. Intel has always been shady with their graphics, trying to tell people they are on par with AMD/ATI and nVidia. You don't have to look any further than the lawsuit regarding 'Vista Capable' advertising. Intel executives didn't like that their crappy IGPs weren't advertised as being able to render the Vista Aero interface, so they got MS to lower the requirements in their advertisements even though the IGP could barely render the desktop graphics.

        I say again...EPIC FAIL!!!!

          • OneArmedScissor
          • 12 years ago

          They’ve been making a big deal for farkin ever that their integrated graphics support DX10. It was literally a lie for a time, but nonetheless, they have the goal of making people believe it works.

          For the person who asked, the reason Intel can’t do as well as Nvidia and ATI is because they have both been at it for so long, and at this point, when you buy a newer motherboard with their higher end integrated graphics, it’s literally the GPU from one of their current lower end cards stuck right onto the board itself.

          So you are getting up to date architecture with up to date drivers and all that jazz, just drastically slimmed down.

          With Intel’s graphics…no telling, except that what you’ve got was never based on a performance part of any sort, and the drivers are no help.

        • designerfx
        • 12 years ago

        Those “things they work fine on” require no graphics processing. Thus, even not having an IGP would be absolutely fine.

        Of course, when reality comes around and things are not software rendered, your FPS goes to crap.

        Achieving a sliver of the performance capability shows that they are just continuing to fail in that sector.

        • ludi
        • 12 years ago

        Uhm, no. Intel bought Real3D way back when, and sold their StarFighter product as the Intel i740 discrete graphics card. The i740 technology reappeared in Intel's first mainstream IGP chipset, the i810, and it has been progressively updated ever since to maintain nominal compatibility with the basic features of whatever the latest DirectX version happens to be, while being a dog in absolute terms.

        As other posters have now noted, Intel considers this good enough to promote as suitable for video rendering and gaming performance, in spite of the fact that the competition has shown itself capable of making faster IGP chipsets with better power envelopes.

        The only reason Intel gets away with this is that the system manufacturers find it very convenient to buy the entire core hardware platform from one vendor, and of course that will be Intel more than half the time since Intel has excellent mobile CPUs.

      • blubje
      • 12 years ago

      Special-purpose and general-purpose processors have been passing each other in performance/complexity since at least 1968, when Myer and Sutherland wrote about it in CACM and coined the term for it: “wheel of reincarnation”.

      But it doesn’t matter if your customers are “stupid” or don’t remember their history — if it’s faster and cheaper, they’ll buy it.

    • Flying Fox
    • 12 years ago
    • cegras
    • 12 years ago

    Cool, no more panic when the GPU breaks. At least, that’s the biggest plus I see here. As well as accidental flash bricking.

      • derFunkenstein
      • 12 years ago

      If you brick it flashing you’ll probably still be stuck, as you at least need a way to get the frame buffer out of memory and onto your monitor.
