Intel gets serious about graphics for gaming

Last week at GDC, Intel held a press event to showcase the latest developments related to its integrated HD Graphics solutions. That fact alone may not be remarkable, but what followed was a break with the past in many ways.

Intel has often sought to lower expectations for PC gaming in order to carve out space for its graphics solutions, usually taking time to remind all involved of the vast popularity of titles like Farmville. This time, had we not known Intel was doing the talking, we might well have guessed that the host was Nvidia or AMD. 3D graphics was the focus, and truly compelling visuals were on display, with a pre-release game—GRID 2—running on pre-release laptops sporting "fourth-generation Core" hardware. (The word "Haswell" was evidently verboten, though we all knew the score.)

Intel kicked off the event by announcing the impending release of a new graphics driver, revision 15.31, due this week. (I don’t see the driver available for download as I write, but it should be online in the next few days.) The firm counts this driver as the seventh update since the release of its second-gen Core processors and part of a program of continuous improvements in power efficiency and performance. The 15.31 driver promises graphics performance gains of "up to 10%," especially in specific games, and it adds support for OpenCL version 1.2. The driver will be available for Ivy Bridge-based IGPs, although it was built primarily with Haswell—oops, fourth-gen Core—in mind.

Next, former AMD developer relations guru Richard Huddy took the stage. Huddy now works at Intel in a similar dev-rel capacity, one more sign that Intel is getting serious about playing in the PC gaming market.

Huddy said Intel’s fourth-gen Core processors will support the very latest incarnation of the DirectX standard, version 11.1, and then he took things a step further by introducing a couple of new "extensions" to DirectX pioneered by Intel. I put "extensions" in quotation marks because DirectX doesn’t work like OpenGL, where the API can be extended by graphics vendors pretty freely. True extensions to DirectX require the blessing of Microsoft and generally aren’t added to the spec until a new revision is made official. What Intel has done, taking a page from the established graphics chipmakers’ playbook, is create proprietary hooks in its drivers to support new hardware features in the Haswell IGP. The firm is encouraging game developers to use them, and it will attempt to persuade Microsoft to incorporate the changes into DirectX when the time is right.

The first of these additions, dubbed InstantAccess, allows the CPU to read and write memory locations controlled by the IGP. This sort of cross-pollination between CPU and IGP is something AMD has explored in its APU development, as well, and it makes sense for chips with both CPUs and GPUs onboard. Huddy cited a couple of uses for InstantAccess: the CPU could hand off assets for the GPU to use, or it could read from GPU memory to assist with post-process effects.
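
To make the idea concrete, here is a minimal sketch of the conventional Direct3D 11 path for pushing CPU-generated data into a texture, the staging-and-copy round trip that InstantAccess is meant to short-circuit on chips where the CPU and IGP share physical memory. The Intel-specific hooks themselves live in driver extensions and aren't shown here; this is ordinary D3D11, not Intel's API.

    // Conventional D3D11 update of a dynamic texture. On discrete GPUs the
    // runtime and driver stage and copy this data across the bus; InstantAccess
    // aims to let the CPU touch the IGP's memory directly and skip the copies.
    #include <windows.h>
    #include <d3d11.h>
    #include <cstring>

    void UpdateDynamicTexture(ID3D11DeviceContext* ctx,
                              ID3D11Texture2D* tex,    // created with D3D11_USAGE_DYNAMIC
                              const void* srcTexels,   // tightly packed source rows
                              UINT rowBytes, UINT rows)
    {
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        if (FAILED(ctx->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            return;

        // The mapped RowPitch may be padded, so copy row by row.
        const unsigned char* src = static_cast<const unsigned char*>(srcTexels);
        unsigned char* dst = static_cast<unsigned char*>(mapped.pData);
        for (UINT y = 0; y < rows; ++y)
            std::memcpy(dst + y * mapped.RowPitch, src + y * rowBytes, rowBytes);

        ctx->Unmap(tex, 0);
    }

Intel's OpenGL analogue of this capability, the GL_INTEL_map_texture extension, exposes roughly the same idea by handing the application a pointer into the texture's storage, as an Intel driver engineer notes in the comments below.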

The other new feature, PixelSync, gives the programmer control over the ordering of operations across pixel pipelines. Among other things, PixelSync should enable much more capable implementations of adaptive order-independent transparency, which Huddy cited as one of the more difficult challenges in real-time graphics today. He claimed PixelSync will allow correct lighting and shadowing for transparent objects with solid and predictable performance.
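
The reason ordering matters is that blending translucent fragments is not commutative, so a pixel's final color depends on the order in which overlapping fragments are composited. The tiny C++ sketch below simply demonstrates that property with the standard premultiplied-alpha "over" operator; it is a conceptual illustration, not Intel's API.

    // Blending is order-dependent: "A over B" differs from "B over A",
    // which is why transparent geometry needs the consistent per-pixel
    // ordering that PixelSync is described as enforcing in hardware.
    #include <cstdio>

    struct RGBA { float r, g, b, a; };

    // Premultiplied-alpha "over" operator: src composited on top of dst.
    RGBA over(const RGBA& src, const RGBA& dst) {
        float k = 1.0f - src.a;
        return { src.r + dst.r * k, src.g + dst.g * k,
                 src.b + dst.b * k, src.a + dst.a * k };
    }

    int main() {
        RGBA red   = { 0.5f, 0.0f, 0.0f, 0.5f };  // 50% opaque red (premultiplied)
        RGBA green = { 0.0f, 0.5f, 0.0f, 0.5f };  // 50% opaque green (premultiplied)

        RGBA a = over(red, green);   // red fragment blended last
        RGBA b = over(green, red);   // green fragment blended last

        std::printf("red over green: %.2f %.2f %.2f\n", a.r, a.g, a.b);
        std::printf("green over red: %.2f %.2f %.2f\n", b.r, b.g, b.b);
        // The two results differ, so without an ordering guarantee the pixel's
        // final color depends on which fragment happens to blend last.
        return 0;
    }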

Intel has released code samples and documentation for these new features on its website, but it has also taken things a step further by working with the racing game mavens at CodeMasters to integrate some snazzy new visuals, enabled by PixelSync, into its upcoming title, GRID 2.

One of those effects is, well, better pollution rendering. The smoke that occasionally pours out of the back of the cars in GRID 2 is, by default, white and puffy and looks like cotton. With the help of PixelSync, though, CodeMasters has added semi-transparent smoke that is self-shadowing and looks, well, kind of sooty.

Also enhanced is the game’s foliage, which can be semi-transparent and correctly lit with the assistance of PixelSync. Although the picture above looks a little like trees coated with magma, the purpose of the reddish pixels is to demonstrate where transparency and blending are being applied.

In a somewhat too enthusiastic expression of support, the CodeMasters rep on hand ended his brief talk with the declaration that GRID 2 "runs best on Intel." That statement raises all sorts of questions about how Haswell compares to, say, the GeForce Titan. We must admit, we did not lose sleep wondering about the answer.

With that said, we’re just happy to see Intel acting like a graphics company, even if that means a little cheesy marketing here and there.

The next item on the agenda was Intel’s QuickSync video acceleration tech, which has been around for a while but hasn’t seen widespread integration into open-source video editing tools. Intel realized its initial QuickSync licensing terms were not open-source friendly, so it announced at this year’s CES that it would be altering them for fourth-gen Core processors.

Those efforts are already bearing fruit with the most notable open-source video processing software package, Handbrake, which was the surprise winner of our PC transcoding hardware roundup last summer. Two of the program’s key developers, John Stebbins and Tim Walker, were on hand to demo an early build of Handbrake using QuickSync video encoding.

In the demo, the encoding process was distributed across multiple units on the Haswell chip. The CPU cores showed about 35-40% utilization, the graphics execution units were about 40% occupied, and the hardware codec engine was about 35% occupied. We didn’t get exact encoding speeds, but the conversion was said to be happening at rates "much better" than real-time playback—and this was on an Ultrabook. Stebbins and Walker were cautious in their assessments of the quality of the encoded video. One of them called it "very good" but then noted that he hadn’t tried low-bitrate encoding yet.

Handbrake’s QuickSync support is still in the early stages, and we don’t have a precise time frame for its release. We expect lots of PC users to be clamoring for this one as soon as it’s available, provided the image quality lives up to Handbrake’s usual standards.

Intel ended the event by talking about a new revision of one of its developer tools for graphics and about its perceptual computing challenge, an attempt to find new applications for natural human interfaces like gestures and facial tracking.

Curiously, Farmville was not mentioned once.

Comments closed
    • USAFTW
    • 7 years ago

    Desperate, and hilarious. Even if they implemented it, would their still relatively weak hardware be able to crank out playable frames? The money would be better spent on design and hardware capability improvements, and there would be plenty left over for this guff.
    I mean, what’s the point? If NV and AMD are unable to do it because of extension limits and the fact that it’s Intel exclusive, and Intel can’t do it because they just can’t with current and future hardware, then who’s this for?
    And I just realized GRID 2 comes out not long before Haswell is scheduled to launch. So the world should wait for two gens beyond Haswell to enjoy this “runs best on Intel” title. And I have a hard time believing that my 5870 won’t keep up with Haswell’s HD 4000-class graphics.
    So, Epic Fail.

    • NovusBogus
    • 7 years ago

    I’ll believe it when I see it, this is not the first time Intel’s tried to convince everyone they were going to do graphics fo realz. A tailored demo using magic hardware does not a successful GPU make.

    • Geonerd
    • 7 years ago

    Maybe people would take Intel ‘seriously’ if Mooley Eden demonstrated a high resolution racing game running in real time! Oh, wait. He already did that. (Well, didn’t.)

    Cue cynical, uncontrollable laughter. 😀

    • ronch
    • 7 years ago

    About time. ATI has been serious about graphics since 1985, and Nvidia, since 1993.

    • rootheday3
    • 7 years ago

    [i]Full disclosure: I work on the Intel graphics driver team; the statements that follow are my opinion and not Intel's official position... they are also pretty long.[/i]

    The primary goal of real-time graphics research is to innovate techniques and algorithms and give them to ISVs in a way that lets them create better games - for example, by allowing implementation of a technique that would otherwise not be possible at all, OR by making a technique less costly in rendering power so that it can be enabled on a broader class of devices than the previous "brute force" approach, OR by providing a reusable "worked solution" to a common rendering problem.

    In some cases, innovation can be done within existing specs - for example, by creating a library to do image processing with DirectCompute or OpenCL. Intel has contributed here in the past with things like MLAA. In other cases, the innovation requires going beyond the existing specs. Since specs are long-lived, the owners of the core specifications (Microsoft and Khronos) tend to be relatively conservative about adopting new things - they don't want their standard to get cluttered with crufty stuff that no one uses or that has spec bugs. They want some confidence that new proposals add significant value (e.g., will be used by ISVs) and have reliable, well-defined semantics in one or more hardware vendor implementations. This has the potential to create a chicken-and-egg situation.

    To break the deadlock, hardware vendors have historically innovated via extensions, which then demonstrate the value and make the behavior clear; the standards then evolve to subsume the extension. In APIs like OpenGL and OpenCL that provide an extension mechanism, Intel has already implemented extensions and published the extension specs - see cl_intel_dx9_media_sharing, for example. Once the standards body sees the vendor extension is valuable, they promote it - see cl_khr_dx9_media_sharing, for example. I expect that Intel will eventually release OpenGL or OpenCL versions of PixelSync and other future DirectX extensions. In some cases, Intel may release an extension first on DirectX (because of the larger pool of game developers), but in other cases, Intel may release it first on OpenGL or OpenCL. See gl_intel_map_texture, for example - this is essentially the same functionality as the new DirectX InstantAccess extension; it was implemented in OpenGL first for Rage in 2011, to allow faster texture updates for megatexturing, and formally released as an extension in 2012.

    In DirectX the process looks a bit different. Since there isn't a formal extension mechanism, hardware vendors have had to find creative ways to implement extensions. In DirectX 9, ATI and Nvidia used the DX9 FOURCC "custom texture formats" mechanism to give the driver "hints" about what behavior to use. A catalog of DX9 extensions can be found [url=http://aras-p.info/texts/D3D9GPUHacks.html]here[/url]. These earlier DirectX extensions didn't create fragmentation in the ecosystem, because their behavior was published in an external specification that allowed other hardware vendors to implement the same functionality. The success of these extensions in the industry led to them being incorporated almost verbatim into DX10/10.1 - ATI1N/2N became BC4/5, Fetch4 became gather4, DF16 and DF24 became sample_c (percentage-closer filtering), ATOC became AlphaToCoverage, the NULL render target became an allowed pipeline configuration, etc.

    The new Intel extensions are similar in spirit and innovation to these earlier ones. Other IHVs are free to implement similar functionality if they choose. Intel works with Microsoft and Khronos and, while I am not privy to those conversations, I am pretty sure that these extensions and others are being discussed. The extension behavior and external interface are documented and examples are published:

    [url=http://software.intel.com/en-us/blogs/2013/03/27/adaptive-volumetric-shadow-maps]PixelSync for volume shadow map generation[/url]
    [url=http://software.intel.com/en-us/blogs/2013/03/27/programmable-blend-with-pixel-shader-ordering]PixelSync for programmable blending[/url]
    [url=http://software.intel.com/en-us/blogs/2013/03/27/cpu-texture-compositing-with-direct-resource-access]InstantAccess for texture compositing on the CPU[/url]

    InstantAccess lets ISVs avoid some OS/API overhead for managing and updating textures, vertex buffers, etc. That overhead is needed for discrete graphics because discrete GPUs have their own separate memory, and explicit copies must be made between CPU and GPU memory. In a unified memory system where the GPU and CPU share the same physical memory, avoiding these copies is a performance win and lets ISVs work more like they do on consoles. InstantAccess allows developers to take advantage of this architectural advantage that integrated graphics devices have.

    PixelSync is a bit different. Traditional 3D rendering guarantees that triangles flowing down the pipeline are rendered "as if they were in order" - that is, if multiple triangles are in flight in the rendering hardware and two of them touch the same pixel in the render target, then depth/stencil operations and blending must be retired in triangle order. Doing so ensures deterministic rendering behavior with operations that are not commutative. As of DirectX 11, pixel shaders can read and write arbitrary locations in buffers (UnorderedAccessViews), which allows a lot of cool "compute" to be done in pixel shaders. However, because the pixel shaders run ahead of the pixel back end, the normal hardware scoreboarding techniques for the pixel back end do not provide any ordering guarantee for UAV reads/writes happening on multiple pixel shader threads that hit the same pixel. Thus, techniques where the current pixel coordinates are used to index into the UAV buffer may happen out of order. This extension allows developers to request that the hardware apply pixel ordering rules earlier than the pixel back end and enforce ordering of UAV accesses at some point in the pixel shader.

    Without this guarantee of ordering, the only ways to ensure that multiple pixel shader threads don't clobber each other are to do multiple passes sorted by depth (and still risk artifacts if you don't have enough passes) OR to allocate arbitrarily large UAV memory buffers so that any potential collisions are avoided, plus an additional pass to combine them (render target size * max depth complexity). These alternatives are very expensive for the performance of algorithms like programmable blending, order-independent transparency, and adaptive volumetric shadow maps. PixelSync lets those algorithms run safely and efficiently at fixed memory cost.

    In both cases, Intel is using innovation to replace "brute force" with a more elegant solution, allowing higher performance and higher-quality effects than would otherwise be possible. Hope this helps clarify things a bit.
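
    (For illustration, here is a CPU-side sketch of the "fixed memory cost" idea described above: each pixel keeps a small, fixed array of fragment nodes and folds new fragments into it as they arrive. On the GPU, that per-pixel read-modify-write is the section PixelSync keeps exclusive and in primitive order. The node count and the crude merge rule below are assumptions for illustration, not Intel's published sample code.)

        // Fixed-memory per-pixel accumulation, the pattern behind adaptive OIT
        // and adaptive volumetric shadow maps as described in the comment above.
        #include <algorithm>
        #include <array>

        struct Node { float depth; float trans; };   // transmittance recorded at a depth

        constexpr int kNodes = 4;                    // fixed memory budget per pixel

        struct PixelFunction {
            std::array<Node, kNodes> nodes{};
            int used = 0;

            // Fold one incoming transparent fragment into the per-pixel function.
            // On the GPU this whole method is the per-pixel critical section that
            // PixelSync serializes; without it, concurrent shader invocations could
            // interleave their reads and writes and corrupt the array.
            void insert(Node frag) {
                if (used < kNodes) {
                    nodes[used++] = frag;
                } else {
                    // Budget exhausted: crudely merge into the farthest node so
                    // memory stays fixed (a stand-in for smarter compression).
                    nodes[used - 1].trans *= frag.trans;
                }
                std::sort(nodes.begin(), nodes.begin() + used,
                          [](const Node& a, const Node& b) { return a.depth < b.depth; });
            }
        };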

      • Visigoth
      • 7 years ago

      Thank you for taking your time informing us poor souls of your very valuable work over at Intel. Believe me when I say your efforts are very much appreciated by those of us who stick with Intel IGP’s (for their energy efficient use, QuickSync, etc.).

      It will matter even more when Haswell Ultrabooks come out, even if some of us will only play Age Of Empires 2 HD.

      • lycium
      • 7 years ago

      I prefer to credit Reshetov himself for MLAA (among much other great research). Intel have a lot of great ray tracing research going on generally, which is a great boon for me (working on a commercial ray tracing engine).

      Thanks for the interesting post!

      • NeelyCam
      • 7 years ago

      I actually read through all of it, and was able to pick up [i]some[/i] bits/pieces... Thank you for taking the time to write such an elaborate explanation on these techniques.

    • zzz
    • 7 years ago

    I’m not really surprised about Intel championing their newest graphics tech; if you’ve been keeping up, you’d notice that IPC for their CPUs is crawling forward while their GPU tech is leaping forward each gen. They’ve hit the point where their GPU can at least play modern games (albeit on the lowest of all settings), where previously there were hoops upon hoops to jump through to even make a game run on their hardware (Google the Source engine (or any game based on it) and the Intel Atom N570). At best, Intel’s R&D is fantastic; at worst, we assume every modern game is simply a console port and thus is based on aged hardware. I suspect the answer is somewhere in between.

    • chuckula
    • 7 years ago

    Well guys, I have news for you: The future is in Tablets! (Haven’t you heard?)

    Here’s what the supposedly crippled HD graphics in an old Ivy Bridge do to the vaunted iPad 4, and this is when Windows 8 is being used on the poor Ivy Bridge: [url]http://www.anandtech.com/show/6876/microsofts-surface-pro-vs-android-devices-in-3dmark[/url]

    So let’s assume the iPad 5 has twice the GPU power... it still loses to a 2012-model, performance-crippled Ivy.

    Slightly more seriously: I’ve been saying for over two years that Intel is out to get its GPU parts running in low-power & mobile environments. Intel [b]does not care[/b] that your 100-watt desktop Trinity can beat an HD Graphics part at GPU while getting destroyed at CPU and still losing easily to a regular $75 GPU. From the relative sales figures that Intel and AMD post, it looks like a bunch of people agree with Intel. Get down into mobile, and Intel’s supposedly crappy parts start looking a whole lot less crappy compared to what other people are able to do when they no longer have blank checks in the power envelope. This is where Haswell will be dangerous and put a real damper on any ideas that ARM is just going to waltz in and take over everything at the low end.

      • windwalker
      • 7 years ago

      O rly?
      Those fools at Microsoft, what ever were they thinking using that crippled old Ivy Bridge.
      Everyone knows performance is king so they should have used Xeons.

      Good thing they included a keyboard to turn it into a real computer, just like the dozens of… dozens of tablets sold with Windows XP Tablet Edition.
      And of course, some cooling fans. That’s what real men use for their CPUs that cost as much as an entire iPad mini.

      • tipoo
      • 7 years ago

      Well, let’s see: a chip with a 17W TDP performs better than chips in the single-digit watts, with the performance gains being up to 3x. Am I supposed to be impressed? I was actually surprised the other way; I thought SoC GPUs were still much further behind than just 3x.

        • Spunjji
        • 7 years ago

        This. This gets interesting when Intel stick their HD4000 class graphics into the 22nm Atom. The CPU performance will definitely be competitive, no doubt about that, but I’m not so sure about GPU given that it will have:
        1) Less shaders, and
        2) A lot less thermal headroom.

          • willmore
          • 7 years ago

          The rumors are that VV will have an IVB-“type” graphics unit, but it’s expected to be highly stripped down, like to 1/4 the normal size. Again, just rumors, but they’re rumors based on looking at the already-released Linux graphics driver code, so.

            • mczak
            • 7 years ago

            I don’t think you can call Valleyview having an IVB-like graphics unit (which is what Intel calls Gen 7) a “rumor”. This is pretty much confirmed, and the leaks were also saying 4 EUs (compared to IVB’s 16 in the HD4000 and 6 in the HD2500). It is difficult to tell how fast that’ll be; with similar clocks it should be maybe around half as fast as that HD4000 (there’s nowhere near linear scaling between the 6 EUs in the HD2500 and the 16 in the HD4000), though the leaks didn’t mention any clocks (the HD4000 in ULV parts, while having a very high maximum clock, doesn’t actually get anywhere close to its max clock in practice due to thermal and power limits).
            But in any case, comparing the HD4000 to the SGX series 5 graphics in the iPad 4 is also a bit unfair, since the former is a full DX11 part, whereas the latter is essentially DX9 with additional corner cutting (that is, not DX9 compliant; the SGX545 is, though that’s not a fast part). We’ll see how it stacks up against series 6 “Rogue” SGX graphics. If I had to guess, I’d suspect the Valleyview GPU will be competitive with low-end series 6 parts (which are probably no faster than the high-end series 5 parts found today), but there will be chips out there with quite a bit faster ones.

    • HisDivineOrder
    • 7 years ago

    I’ve heard this song and dance before from Intel. I fully expect one day they’ll catch up to the glacial update pace of consoles, but honestly the consoles have been sitting on their hands for years now. Intel should have ALREADY lapped them by now.

    What Intel really needs to get serious about is making a driver that isn’t about a specific game, but about updating their GPUs to run ALL games. They should also unify their GPUs so they all take the same drivers and are the same basic hardware, with some faster or slower, some with MORE hardware maybe, but the underlying tech is all the same.

    Their strategy is disjointed and haphazard.

    And is there any company Codemasters won’t sell themselves out to? Seriously. “Best on Intel?” Really? REALLY?

    Can’t wait to see that one play out.

      • tipoo
      • 7 years ago

      I think the HD4000 already surpasses the PS360 GPUs, or at least it does on paper in Gflops. Of course the console chips are taken better advantage of, but I think Intel has already beat them. Even next gen SoCs in tablets will be getting close, with GPUs up around 200Gflops.

      That said, just as Intel is getting in that ballpark the PS4 and maybe Durango will be coming out, and at 1.84Tflops will again be a high target that Intel will take a while to hit.

    • spigzone
    • 7 years ago

    Can’t wait to see what Kaveri brings to the ‘game’.

    • spigzone
    • 7 years ago

    AMD – not looking so harmless anymore.

    • PenGun
    • 7 years ago

    Wow. The longest article I’ve ever seen here, do you get paid by the word? I joke but this is a puff piece.

    Whoo hoo I can transcode stuff a bit faster. I actually quit doing that when it became mainstream. I still have a brace of scripts for mencoder, but now ffmpeg I can guess at and get right most of the time.

    What is this hand brake you speak of? I have cranked on brakes on a train; is that close?

    So how do I get 8 up and 20 down instantly? I expect a much better ratio soon.

    • kvndoom
    • 7 years ago

    Intel has been serious about gaming graphics since the 90’s. People can’t be seriously holding their breath.

      • MadManOriginal
      • 7 years ago

      Maybe they’re pulling the world’s longest April fool’s joke.

    • Cyco-Dude
    • 7 years ago

    …pollution rendering. lol, those are the tires smoking as the car skids into the corner.

      • willmore
      • 7 years ago

      And burnt rubber isn’t pollution, somehow?

    • Shambles
    • 7 years ago

    Serious enough to strip down the desktop Haswell IGPs to weaker parts.

      • rootheday3
      • 7 years ago

      Your statement implies that you think there is a viable market with good margins for Intel to pursue selling high end integrated graphics in the desktop space. Let’s test that out: what price premium would you be willing to pay to have high end integrated graphics in a desktop CPU?

      Suppose Intel had two quad-core CPUs with identical CPU performance. One has graphics performance like today’s HD 4000 or a bit better. This die is ~200mm2, of which graphics constitutes ~60mm2. Intel sells this CPU for $180, or ~$0.90/mm2. The second has double the graphics performance and is 60mm2 larger. To not take a margin hit, Intel would need to price the second chip at $234; perhaps a bit more to compensate for lower yields on the larger die size. Are you willing to pay ~$50-60 more for the higher integrated graphics? Or would you buy the cheaper one and install a discrete card for $100-200 because 2x HD4000 still isn’t good enough?
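
      (Spelled out with the hypothetical numbers above: $180 / 200mm2 = $0.90 per mm2, and (200 + 60)mm2 × $0.90 per mm2 = $234.)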

      Or do you expect Intel to take a margin hit and give you higher graphics performance for free?

      Note that in mobile the story is different – the volume of the device, cooling solutions, platform integration complexity, etc. make having premium integrated graphics a viable business model.

        • Celess
        • 7 years ago

        Do your parents know you are out harassing your customers?

        • chuckula
        • 7 years ago

        Stop using logic! We want crippled desktop CPUs with IGPs that are almost – but not quite – as fast as $75 discrete GPUs and WE WANT THEM NOW!

        Oh, and raise the price while you’re at it!

        • Andrew Lauritzen
        • 7 years ago

        Not really sure why this was down-voted… I agree it’s far from clear that the “high end desktop APU” market actually exists. Certainly AMD has yet to find significant success/margins there, so who are these theoretical users who would buy such a part? Seems to be a compromise with few benefits.

    • OU812
    • 7 years ago

    Scott: Please run FCAT on Intel HD graphics to see if Intel is legit on their FPS numbers or if their drivers are playing fast & loose by generating runts or drops.

    • Bensam123
    • 7 years ago

    I hope this doesn’t lead to a split in the DX standard, where AMD/Nvidia/Intel start sporting their own little proprietary quirks. I know DX has stagnated and I’m quite sad by that, but a split definitely wouldn’t be good for the lazy devs in the game industry, considerably so when they’re waking up from their consolization hibernation.

    The latest trend in proprietary physics would probably be a good indication of why this is a bad idea (no one really uses them at the end of the day).

      • Airmantharp
      • 7 years ago

      PhysX was a great product, and I’m still not sure why Nvidia killed it in its infancy. There was a lot of potential there for complex physics to become a part of all aspects of gameplay and not just for a boost in eye candy.

      As for DX; we know better. Microsoft won’t split it, and that Intel is doing this crap now strikes me as kind of silly. It’s a PR move more than anything else; because this stuff relies on extraneous hardware calls, it would only run on an Intel GPU, so unless developers target that performance envelope and find that these functions provide some performance advantage worth the development effort, they’ll likely ignore it.

      The memory access thing sounds cool, as it appears to allow developers to better use the iGPU as a coprocessor, but I’m waiting on real proofs of concept here.

        • Bensam123
        • 7 years ago

        YES! PhysX had sooo much potential when it was introduced and all the hype that built up with it was well deserved, but game devs simply weren’t interested and then Nvidia lobotomized it after buying it and everything died out.

        Aye, I’m not worried about MS splitting it; I’m worried about hardware companies doing what Intel is doing and developing their own ‘extensions’, which are pretty much splitting it on their own. No one says you need to exactly follow DX specifications, just like with specifications for other standards; ATX is a good example. Then it just turns into AMD/Nvidia/Intel trying to pull game developers each different way to try and get them on board with their own set of specifications (Dirt 2 is a pretty good example). It’s not good for anyone.

        I agree some of these things seem cool and MS really should be adding these to DX instead of sitting on the standard because their precious console is still two standards behind. Coincidentally (or not so much) DX revisions stopped happening after consolization took hold as well.

        Ideally I think game developers and graphics card manufacturers should be pushing for OpenGL instead. It’s almost identical to DX, but it’s open and it’s still improving. It completely removes MS and biases trying to pull a standard in their favor.

          • PixelArmy
          • 7 years ago

          [quote]PhysX had sooo much potential when it was introduced and all the hype that built up with it was well deserved... [/quote] No, it did not. That is a romanticized view fueled by anti-nvidia sentiment. Here’s their first release of AGEIA PhysX as reviewed by TR: [url]https://techreport.com/review/10223/ageia-physx-physics-processing-unit/10.[/url] It has all the complaints that everyone reposts every time PhysX comes up (overuse of particle effects, wavy cloth, etc.)

            • Spunjji
            • 7 years ago

            You’ve not entirely dismissed his claim that it had potential there. At the outset the results of its implementation were not stunning – it’s unclear how that might have changed had time gone on.

            • PixelArmy
            • 7 years ago

            Perhaps not… I obviously cannot claim there was absolutely no potential.

            Only that upon initial release, TR alludes to the fact that a modern CPU (common) can do almost everything that this new add-on card can do, leaving very little room on the potential side. [b]Certainly not "sooo [sic] much potential". I mean so much more that it needed two extra o's![/b]

            So, if not the card, how about the API? Well, I contend its potential was just that of [i]any other[/i] physics engine (even in the review, the more interesting stuff used Havok). I'm just pushing for some sort of backing for these claims, because I see them posted over and over without any, nor any history of it being true, other than people just stating it over and over in these threads.

            • Airmantharp
            • 7 years ago

            I’m not sure if there’s any left; we remember where PhysX was going, but it never got there- those bridges got burnt. The promise was that physics processing in games would be used as an integral data source for game engines, not just a tack-on for graphics like it is today.

            • Bensam123
            • 7 years ago

            Aye, at least they’re using it for eye candy now, though. That is a rather new development. For a while, physics weren’t even used for eye candy, and the fact that they’re starting to use it (again) for eye candy is a good thing. It’s a start.

            • NeelyCam
            • 7 years ago

            You say “aye” one more time, and I’ll put a cutlass in your eye

            • Bensam123
            • 7 years ago

            Guess the hormone therapy isn’t going well Tiff? :l

            • Bensam123
            • 7 years ago

            You’re talking about the PPU again not the software. Two different things.

            • Bensam123
            • 7 years ago

            That’s neat? I read the reviews when they first came out here at TR. It tests a couple games and makes claims based on the hardware. The review was exclusively about the hardware.

            I’m not talking about the hardware, I’m talking about the prospect of physics in games and the revolution PhysX was heading up before it died out and Nvidia butchered it.

            Ageia was more than the PPU they were selling; they also produced a very good software SDK that was free for developers to use (royalty free) and allowed them to easily add physics elements to their games. One of the best legacies left by this time period is Men of War and all its iterations. The first version was called Faces of War, and it was one of the first games to use physics to any sort of notable level. Try it out.

            • PixelArmy
            • 7 years ago

            Leading the revolution? All the non-eye candy physics you’re always alluding to was almost never done in PhysX in the first place, games you list are exceptions. Faces of War is also listed at 2006, 2 years after HL2 used a modified Havok. I’m not trying to praise Havok, but it had a 4 year lead over your proclaimed “leader” PhysX. Heck, the PPU launch game used Havok for “gameplay physics”. I’d say the hardware certainly affected the software’s prospects.

            But, I get it… You have a fondness for Faces/Men of War and that used physics well. Its physics were done with PhysX, so PhysX is the leader!

            You dance around with generalities that apply to all physics middleware… Sooo, what potential did the PhysX software have [u]that wasn't generic to all physics software[/u] (mind you, that already existed at the time like Havok)? Sounds like the only thing differentiating it was being royalty free...

            • Bensam123
            • 7 years ago

            I never said other games didn’t have physics, but if you consider jumping up and coming down to be physics, it leaves a lot to be desired. Tribes 2 had a physics engine too… and that was two years before Half-Life Source.

            What PhysX did was give game developers a free SDK that they could integrate into their games, with predetermined functions they could call. That means they don’t have to program a physics engine from the ground up. Half-Life Source and Tribes 2 do not offer this. They’re game engines, not physics engines, even if they have some integrated functionality.

            Creative did the same thing with EAX and OpenAL (although that died off). It means developers need to do less work and can put greater functionality in their games. Any game developer can add increased physics or whatever to their games, heck game developers don’t even need to program for directx! They can directly call all the hardware, but that doesn’t mean it’s the best course of action or the most beneficial one.

            They very much DID lead the physics revolution, as no one did this before them. Just the same as Creative led the sound revolution, which also died out with them.

            Havok was just eye candy. Half-Life Source uses a heavily modified version of the Havok engine of the time. Havok normally was non-interactable, meaning no one else you were playing with could see what you were seeing or interact with it. Like in Source, when you break a box or something and there are bits on the ground, the bits you’re pushing around are different from the bits people see on their screens. Valve heavily modified the engine so you could do things like pick up and drop boxes and other people would see it, but that was still extremely limited.

            A good example: the end of matches in CS, when the bomb blows up and there are like 8 different objects exploding all over the place and the game chokes. It can’t handle any sort of meaningful simulation beyond simple player interactions with objects.

            Havok has changed over the years, but it’s been a very long time since PhysX introduced that functionality. In Men of War you can drive through buildings, and it actively changes and updates on the screens of the other players you’re playing with. The level of physics interaction is completely different.

            And yes, PhysX was royalty free… it still is, but Nvidia did a number on it.

            So what did PhysX offer that other solutions didn’t?

            -A completely programmable physics engine and SDK that could be easily called by developers and they don’t need to develop in house.
            -A scalable physics environment that doesn’t choke on itself.
            -Interactive physics (not just eye candy)
            -Royalty free

            All of those are extremely big points, especially the first one.

            • PixelArmy
            • 7 years ago

            I’m calling total BS on Havok being non-interactive… perhaps not totally useful gameplay-wise (these are developer decisions anyway), but they were interactive…

            Ex. Max Payne 2 (before even HL2)
            – Knocking enemies down with doors
            – Pushing objects around for minor cover
            – Jumping through walls of boxes in dream sequences.

            And of course Havok had a separate engine and SDK! Nowadays PhysX is integrated into things like the Unreal Engine, which gives it an advantage now, but that certainly wasn’t the case for the time period in discussion.

            I’m not sure about an actual scalability comparison, but again Max Payne 2 supposedly had a physics representation for all the objects in the world… (Not to mention, scaling everything up for total destruction wouldn’t even make sense for most games, if that is your criteria.)

            Creative didn’t even create OpenAL, and their rise/fall doesn’t relate in any way to PhysX. Non-sequitur… I’m not even sure what the logic is here… Creative was a leader and they died out. Ageia PhysX died out, so they were the leader? (I never said the leader can’t die out.)

            Well, at least you [i]finally[/i] listed some actual points to debate... At most, you have "royalty free" and "scalability", which is tenuous at best.

            • Bensam123
            • 7 years ago

            Max Payne (any iteration) isn’t multiplayer. A big aspect of doing a physics engine well is syncing data with other players. Essentially, each and every update for an object has to be sent to the other players, so the more objects you have, the more updates need to be sent out. Physics engines have different ways of dealing with this, but it’s quite similar. Each machine won’t produce the exact same simulation (due to results being estimated and rounded; real physics takes supercomputers), so the results need to be synced. So in a lot of ways you can get a decent physics simulation, but you can’t do it in multiplayer at all. It’s a big hurdle for developing a physics engine (and also part of why the Source engine chokes during massive high-speed collisions between rigid objects).

            [url]http://www.youtube.com/watch?v=rlfQISXdBrI[/url]

            Looking at that video, it doesn't appear that any of that is interactive. You can push the stuff around, but you can't actually stand on it, I doubt you could stack any of those items, and they definitely don't function as cover. They appear to have absolutely no weight and don't actually impact the character when he runs into them. In other words, they're little doodads designed to make things look pretty. If the object doesn't actually impact the character when he interacts with it, there is relatively little going on in terms of simulation. I highly doubt everything in that world is simulated. There are a couple of doodads and knick-knack boxes you can move around.

            I really don't think you remember the hallmark game for PhysX, Cellfactor...

            [url]http://www.youtube.com/watch?v=AELtvnmMjCQ[/url]
            [url]http://www.youtube.com/watch?v=MBNHPHVbQts[/url]

            (That is all multiplayer.) There used to be better videos showcasing the features, but the sheer number of rigid bodies it can do is immense (this was back in 2006). Interactive rigid bodies at that. The rigid bodies interact with players; they even get inside vehicles and get thrown around. Players interact and push other players around. You can't look at something like that and tell me moving some boxes around in Half-Life Source is nearly the same. There is no way. You're looking at hundreds of interactive objects; it's quite a bit different than 8-10.

            I'm not talking about Havok today. We're discussing PhysX when it came out and why I thought it led a revolution in physics.

            I never said Creative made OpenAL; they ported EAX effects and their SDKs to OpenAL. It was the last attempt at giving developers a full-fledged sound SDK with prebuilt effects. It wasn't directly applicable to PhysX; it was a comparison to another part of the game industry where a company fleshed out and helped develop a certain aspect of it. Non-sequitur my ass, I'm sorry you don't know what a comparison is, I can't help you there.

            I listed four points, none of which you attempted to actually argue:

            - A completely programmable physics engine and SDK that could be easily called by developers, so they don't need to develop one in house.
            - A scalable physics environment that doesn't choke on itself.
            - Interactive physics (not just eye candy)
            - Royalty free
            - Multiplayer physics (we'll add that one too)

            Simply citing Max Payne as an example of what Havok could do (which is almost nothing) does not change what PhysX could offer. You don't even address the point of having a prebuilt and fully programmable physics engine that developers could use (which saves development time and money in addition to making a game more fleshed out). Most games completely disregard any sort of meaningful physics, now and in the past. Dropping a box here or there is such a far cry from what physics can really do. And if you think royalty free is a small point when you're spending a couple thousand or tens of thousands or hundreds of thousands of dollars to license the Havok engine (depending on volume and price), then I can't help you with that either.

            • PixelArmy
            • 7 years ago

            I understand we are talking about PhysX, but I am refuting your claims that it was the leader by showing you counter points, generally because Havok did it. Every time I do you make up something else, now it has to be multiplayer (note, depending on the game, synching may actually be irrelevant and these are more network issues than physics issues anyways).

            Me: Havok had roughly the same capabilities:
            You: No it didn’t have interactive graphics, PhysX is the leader there.
            Me: Havok did interactive physics (see Max Payne, HL2, etc, etc, etc.) before PhysX was even released.
            You: No, those don’t count cause they’re not multiplayer.

            I did address your points, you simply choose to ignore them:
            - Completely programmable physics engine and SDK. [b]WTF do you think Havok came as?!??![/b]
            - Scalable - I conceded I didn't have concrete data (mind you, you don't either), but I did say the majority of MP2 was backed with a physics representation.
            - Interactive graphics - I cited examples mainly with Max Payne 2.
            - Royalty free, I acknowledge that I do not know the licensing terms of everything and that royalty free is a big thing. However, I don't find evidence that it held people back.

            And Cell Factor was a tech demo, barely a game, that was released way past all the examples I've shown. I mean, seriously, look at the timeline: Havok 1.0 (2000), MP2 (2003), HL2 (2004), PhysX 1.0 (2004), PPU + Cellfactor (2006), Faces of War (2006).

            Now, I know you're gonna simply reply that all those don't count for made-up reason #2375. (Probably core parking.)

            • Bensam123
            • 7 years ago

            Havok did nothing on the level PhysX did. That is the whole point of me posting those videos and making a bulleted list. Because Havok simply wasn’t capable of doing the same thing on nearly the same level.

            Do you have anything showing any sort of footage of Havok doing remotely the same thing from those time periods? No, you don’t. I posted the Max Payne video, which shows a handful of weightless boxes being pushed around on the ground, and they don’t even interact with either Max (push him back) or the villains; they shoot through them as if they aren’t even there.

            Moving a couple of boxes around that don’t even interact with you is nowhere close to what PhysX was capable of doing, which is why it very much did lead the physics revolution. In its own right, Tribes has a physics engine (which wasn’t Havok), and Half-Life Source had a modified physics engine, and they were nowhere close to the same level of what PhysX offered. Other games had physics engines too (you can even say jumping up and down is physics, because it is).

            Simply stating that Havok has a programmable SDK too does not refute my original point which is one of the reasons PhysX lead the physics revolution, both have SDKs. Offering a SDK is great and everything, but if that SDK doesn’t offer the same functionality (Maxpayne or Half-life Source vs Cellfactor), then it’s definitely inferior. We could even start throwing in other functionality here, such as fluid dynamics, soft body dynamics (Half-Life Source still can’t do this) and tearable cloth (still can’t do this either).

            How don’t I have concrete data? I linked you to videos showing hundreds of interactive rigid bodies and players being thrown around on the screen versus a couple of boxes (which were like the doodads found on the ground in GRAW). I also cited Half-Life Source explosion sequences, such as at the end of CS matches, where they try to throw around rigid bodies at high speed.

            Max Payne is eye candy, which I’ll state for the fourth time. Weightless boxes that don’t even interact with Max Payne (push him back) and don’t function as actual objects are not interactive.

            I really have no idea how you can state that dumping a couple thousand, tens of thousands, or hundreds of thousands of dollars of your budget for a game into licensing fees wouldn’t hold people back. I’m really starting to question the legitimacy of the logic you’re using if you don’t understand basic budgeting.

            Cellfactor was released as a full-fledged game (two different ones, at that). The original Cellfactor was one people could play in, and it was no different than Half-Life Deathmatch. It doesn’t even matter that it’s not a full-fledged game either; we’re looking at the functionality of PhysX, and it just so happens that it shows it off very well.

            Yup and the only one of those things you mentioned with remotely useful physics, Half-life Source didn’t happen till 2004 and the huge gap between physics in Half-Life Source and PhysX is very apparent. It doesn’t matter when Havok came around if it did absolutely nothing remotely useful. Not sure why you mention the PPU either, PhysX was scalable to as many CPU cores as you had available. It doesn’t matter if you did or didn’t have a PPU (also why it was amazing), if you read the TR article you linked you would know that.

            This is like arguing with people who thought Aureal3D was on the same level as EAX. It should be very easy to tell when a product is completely superior to another one. I’m pretty sure you didn’t look at either of the videos I posted. There is absolutely no contest, and arguing things like royalty fees and a more fleshed-out SDK is second rate to the end product.

            Adding a couple knick-knacks is not enough to lead a revolution, you need to be heading up the curve by a large margin and that’s what PhysX did.

            • PixelArmy
            • 7 years ago

            My issue is your inference that a newer PhysX game that used physics in a fashion you like or agree with (a fully destructible environment) means that PhysX was the leader.

            A lack of a fully destructible environment is often a game design issue; if you could simply destroy everything in every game, there wouldn’t be a purpose to most games. You allude a lot to war games, where this level of destruction makes a lot more sense.

            Additionally, things like syncing multiplayer physics are network issues; it is a waste of bandwidth to send all the inputs over the network if it doesn’t make sense for the game. I’ll give you a PhysX example: Borderlands 2 kicks up rocks and stuff while driving. Does everyone need the same rocks? Should I therefore infer PhysX sucks? Of course not.

            And for the fourth time, you'll have to be corrected. The video of MP2 you linked shows interactivity: lots of boxes, cans, ladders, etc. taking bullets (meant for the characters) and getting knocked around… Here it is straight from Remedy's mouth: [url]http://www.gamespot.com/news/max-payne-2-qanda-exclusive-media-6075506[/url] (here they talk about a HUGE amount of physics sim, and the level of SDK/engine polish). And to point out the timeline for the nth time, this was in use before PhysX had even been released.

            And I did waste my time watching the links you posted... You were complaining about the weightlessness of objects in Max Payne 2? But... but... but... look how many more objects are being manipulated... I sure hope so; your examples are 4 years later! The huge gap in Half-Life Source physics? PhysX wasn't even around yet! You're comparing 2003/2004 Havok games to 2006/2007 PhysX games!

            As for budgeting, while I've acknowledged over and over that the royalty-free thing is huge ([url=https://techreport.com/discussion/24562/nvidia-geforce-gtx-650-ti-boost-graphics-card-reviewed?post=719408]in case you missed that, as usual[/url]), especially for people with smaller budgets... I'm just adding that the price didn't appear to hold anyone back, as there were a good number of games using it.

            Look, I agree, they were a leader (perhaps not all technical, as the royalties thing showed) and maybe even the best; I just don't think they were as far advanced as you claim. Your examples only affirm PhysX was good (which I tend to agree with), whereas I'm asking you to deny that others were good as well, [i]and[/i] by a "large margin". Your arguments there are speculative at best, as your criteria are based largely on things that involve a mountain of factors outside the physics engine.

            • Bensam123
            • 7 years ago

            No… No, not even close. A fully destructible environment doesn’t depend completely on game design; the engine needs to be there in order to support it. If you don’t have the features available at your disposal, then you can’t add them to the game (even though you could write custom code to create them; that’s exactly what I’m arguing against). Hence why I didn’t say Havok led the physics revolution; I most definitely played HL2, just like a lot of other people.

            The sizeable valley between PhysX and everything else out at the time (including Havok) is why I said PhysX led a physics revolution, because it did. Because what it offered was leaps and bounds over anything else on the market.

            You’re trying to stipulate now. Little non-interactive doodads like rocks are different from a full-size pipe or vehicle flying at you out of nowhere. Don’t try to downplay the importance of physics. We both should be able to agree that physics is very meaningful in games, and pretending that adding small little things to games is the same thing as full-blown interactive physics doesn’t do either of our arguments justice.

            They get knocked around; that doesn’t mean they absorb the bullets. You see Max walking through the objects as if they’re not even there, and they bounce comically around in the air as if they had almost negative weight. That’s not even close to an appropriate physics simulation. As I stated before, you can’t even stand on or really interact with the boxes in MP2. Walking around in boxes and watching them pop in the air like confetti has very little meaning to me, especially when they don’t even interact with characters. That’s all eye candy.

            So, yes, Max Payne has rigid bodies; whether you can consider that an actual ‘simulation’ is extraordinarily questionable. I mean, I walk around in real life all the time and have boxes pop up around me when I tap them with my feet. It’s quite marvelous.

            If you watched the videos I posted you can see those objects moving and interacting with the actual characters and characters inside vehicles. I don’t know why you’re comparing weightlessness of objects compared to number of rigid bodies and sheer number of objects that can be manipulated and think one point defeats another.

            Don’t try to widen the gap by posting a range now. Half-Life Source came out in 2004, PhysX in 2006, and they were still worlds apart. And even after that they still were. Havok didn’t even add cloth or soft body destruction till 2008 and STILL can’t do fluid simulations. You mention timelines, yet here we are seven years later and Havok still can’t do fluid simulations. Half-Life Source can, because they use their own in-house, heavily modified physics engine on top of Havok. I’m pretty sure what it’s using today is a shell of what they originally started out with (seeing as it can’t use any of the updated Havok features either).

            “Royalty free, I acknowledge that I do not know the licensing terms of everything and that royalty free is a big thing. [b]However, I don't find evidence that it held people back.[/b]”

            This is so convoluted it's almost contradictory, and you're still trying to argue it. You could probably ask any of those developers if missing $100,000 from their budget made a difference, and they'd probably tell you it did. That's two full-time employees for a full year, maybe more depending on what they're doing. On the low end, that's an art budget for freelance artists (for games that have tiny budgets). You can't just go 'oh, missing money isn't a big deal cause these games have it anyway'.

            Yeah, the mountain of factors I base my criteria on I added as bulleted points. Such as not choking on more than 8 rigid-body objects and being able to do fluid and cloth simulation... Soft body destruction (that's another big one). MP2 and Half-Life Source could do neither of those. The Source engine has been updated over the years too, and it still can't do soft body destruction or cloth simulations. It can barely do fluid simulations, and what it does looks like jello.

            It led the revolution because it was so far beyond the curve. That's what it takes and that's what it did. Regardless of who does what first, if it doesn't do it well or doesn't stand substantially above the competition, then there is no way it can lead things.

        • rootheday3
        • 7 years ago

        See my post above explaining why extensions exist (#75).

        Game developers are targeting this, and there are “proofs of concept” – recommend you re-read the article and note that GRID 2 is using PixelSync. Rome II: Total War is using InstantAccess.

      • tipoo
      • 7 years ago

      In what way has DX stagnated? Does its competitor have anything it cannot do? And is there any game whose graphics are limited by the standard rather than by hardware and development budgets?

        • Airmantharp
        • 7 years ago

        DirectX doesn’t really have a competitor. And considering that Crytek (in Crysis 3), UE4, and Frostbite 3 (in Battlefield 4) all run in DX11.1, it’s hard to say that we’re really limited.

        Future improvements will likely be for addressing higher fidelity (higher color range for processing accuracy) and compute. I’d expect DX Compute (or DirectCompute?) to bring physics processing and environmental sound processing into games as inputs for game logic and AI, among other things.

        • Bensam123
        • 7 years ago

        It hasn’t improved, that’d pretty much be the definition of stagnation.

        DX isn’t just about new eye candy. There was quite a bit of talk at one point about a physics SDK being integrated into DX (like PhysX and Havok), but that ceased. I mean that in itself could be an entirely new revision of DX, but instead we got DX 11.1, which has some minor tweaks to it, but nothing warranting a new revision or even a care.

        DX 11 added quite a bit of efficiency tweaks as well, as did DX 10.

        Development budget doesn’t factor into any of this. Just because it’s available doesn’t mean you need to use it. I’m arguing for progress. Game developers have creative freedom as always.

        A good chart: [url]http://en.wikipedia.org/wiki/DirectX#History[/url] Notice how there was a revision of DX almost every single year till we hit DX 9? On occasion it was even faster than that.

          • Airmantharp
          • 7 years ago

          Up until DirectX9, Microsoft was trying to keep up with GPU vendors when it came to new features. They also fragmented it a little bit to support different feature sets.

          With DirectX9, ATi basically implemented Microsoft’s specification in hardware with the Radeon 9700Pro. With DirectX10, Nvidia did the same with the 8800GTX; DirectX11 was a tossup, as it was a direct super-set of DirectX10, and not a major rewrite.

          At this point we can expect Microsoft to extend DirectX when there’s support from the hardware vendors. Unfortunately, it’s probably being held up by Nvidia’s insistence on supporting the proprietary CUDA and PhysX technologies. I doubt Nvidia would want to give anyone a leg up by supporting real cross-platform physics and compute in any meaningful way.

            • Bensam123
            • 7 years ago

            That’s a shame. AMD and Intel should plow through that. OpenCL-based physics would be a good start.

            I don’t understand, though: why would DX be competing with GPU vendors for features? Unless it’s an ‘extension’, GPUs reside under the DX or OGL layer.

        • Deanjo
        • 7 years ago

        [quote<]Does its competitor have anything it cannot do?[/quote<] Yup, run on virtually any platform.

    • thanatos355
    • 7 years ago

    Anyone else read the title and start uncontrollably laughing?

      • Airmantharp
      • 7 years ago

      Not uncontrollably :).

      Really, it’s not that Intel GPUs are generally slow (that’s actually okay); it’s that they’re not well optimized, nor does Intel work terribly closely with developers to remedy that.

      Yet Intel ships most of the GPUs in systems that run desktop operating systems, and they have AMD hot on their heals with competitively priced (and better-threaded) low-power CPUs that come with excellent graphics. People are finding out quickly that they can get a system that’s great at everyday tasks, and effective for gaming, with an AMD APU inside for less than an Intel system with a competitive Nvidia GPU.

      So this is good news, if also a bit humorous!

        • indeego
        • 7 years ago

        I think “hot on their heals” is funnier.

          • destroy.all.monsters
          • 7 years ago

          The thought that intel, AMD and Nvidia are playing some kind of meta MMO amuses me. intel as the impotent healer just makes it funnier.

      • MadManOriginal
      • 7 years ago

      No, I think it’s great, because I don’t tend to play only the latest and greatest games that require heavy-duty graphics, although there are a few, so I still need a decent card. Raising the baseline for compact, low-power-draw systems and for laptops is useful: a very compact HTPC or laptop that can play older/less demanding/semi-‘casual’/indie games without a discrete card is nice, and if Haswell makes more demanding games playable even on lower settings, even better.

        • Spunjji
        • 7 years ago

        The only problem is that they’re not (yet..?) bringing their GPU A-game on the systems you just described. Rrrrgh etc.

      • nico1982
      • 7 years ago

      No, I’m just perplexed because there’s nothing in the article to back up the claim in the title.

      • lycium
      • 7 years ago

      Not at all, they have some extremely good graphics researchers on their side.

      • Khali
      • 7 years ago

      Didn’t laugh but the thought “yeah, riiiight” crossed my mind. I wish Intel would get more serious about graphics. If they did it would boost the utility of laptops as gaming platforms without a dedicated graphics card just for gaming. Currently, laptops with Intel graphics are very limited on what games they will run.

      Plus, a three-way competition between AMD, Nvidia, and Intel would hopefully ramp up the development of faster, more capable GPUs. And, most important of all, result in lower prices due to more competition in the field.

        • thanatos355
        • 7 years ago

        “I wish Intel would get more serious about graphics. If they did it would boost the utility of laptops as gaming platforms without a dedicated graphics card just for gaming. Currently, laptops with Intel graphics are very limited on what games they will run. ”

        Tell me about it! I have to constantly explain to my son why his laptop, which Grandma bought him for Christmas, can’t play the games he wants.

        Damn you Intel integrated graphics!

          • destroy.all.monsters
          • 7 years ago

          Don’t buy intel.

            • thanatos355
            • 7 years ago

            Grandma bought it for him, not dad. Dad doesn’t buy Intel laptops for anyone that needs more than web surfing, email, and MS Office.

            • destroy.all.monsters
            • 7 years ago

            So Pops don’t play. Nice of Grandma to do it but it sounds frustrating.

            I have a (now super old) Sony Intel laptop with fairly early switchable graphics (Nvidia 7000-series mobile with a physical switch), and it still plays World of Tanks and a decent number of games fairly well. Bought it for 250 or so four years ago. Sometimes used is the best all-around deal.

          • rootheday3
          • 7 years ago

          Khali or thanatos355: Do you have specific examples of games that won’t play on an Intel HD 4000 with current drivers? Also, by “can’t play,” do you mean they don’t run at all? That they are not playable at any settings? Or not playable with resolution/settings cranked up?

            • thanatos355
            • 7 years ago

            I couldn’t comment about the HD4000, as that is not what’s in my son’s laptop.

            I personally run a GPU-less Intel processor in my main rig.

            My scorn and ridicule were directed at Intel’s generally laughable attempts at graphics performance throughout the ages, if you will.

            Also, no need to get all fanboi about it.

            • chuckula
            • 7 years ago

            To be fair, I think that rootheday3 is collecting a bug report from you guys to see if there is anything they should be fixing…

            • rootheday3
            • 7 years ago

            Yeah, sorry if it came across as antagonistic. I am trying to get clarity on whether there is some issue for Intel to fix. I understand that people have negative opinions based on experiences they have had in the past, but I and many others at Intel have been working very hard over the last couple of years to improve compatibility, performance, and the feature set. I hope people are willing to set aside their preconceptions and look objectively.

            Re the Mass Effect series, the crash was due to the game using a DX9 extension we didn’t support at that time (can’t recall which offhand). We added support for that extension in spring 2010. Mass Effect should work on Intel HD Graphics (aka Westmere) and later (HD 2000, 3000, 2500, 4000). You must have really old drivers.

            I will check on Oblivion when I have access to the bug database.

            • rootheday3
            • 7 years ago

            Bit more data – hopefully to convince you that Intel is serious about graphics…

            Re the Mass Effect series, there used to be a crash due to the game using a DX9 extension we didn’t support at that time (ATI2N). We fixed that by adding support for that extension in spring 2010 (along with a bunch of others: ATI1N, NULLRT, INTZ, RESZ, DF16, ATOC; later Fetch4 and DF24). There was one other bug related to memory management that was fixed in a later driver. The Mass Effect games should work on Intel HD Graphics (aka Westmere) and later (HD 2000, 3000, 2500, 4000).
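
            For the curious, these DX9 “extensions” are just FOURCC formats that the driver agrees to recognize: a game probes with CheckDeviceFormat and then creates resources using the FOURCC code. A rough sketch of that pattern, using INTZ as the example (illustrative only, not actual game or driver code):

            [code<]
            #include <windows.h>
            #include <d3d9.h>
            #pragma comment(lib, "d3d9.lib")

            // INTZ is a FOURCC depth format; once the driver advertises it, a game can
            // create its depth buffer as a texture and sample it in a later pass.
            // ATI2N, RESZ, NULLRT and friends are probed and used the same way.
            const D3DFORMAT FOURCC_INTZ = (D3DFORMAT)MAKEFOURCC('I', 'N', 'T', 'Z');

            bool DriverSupportsINTZ(IDirect3D9* d3d)
            {
                // Probe: does the driver recognize this FOURCC as a depth texture format?
                return SUCCEEDED(d3d->CheckDeviceFormat(
                    D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
                    D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, FOURCC_INTZ));
            }

            IDirect3DTexture9* CreateReadableDepth(IDirect3DDevice9* dev, UINT w, UINT h)
            {
                // Use: create the depth buffer through the FOURCC format so it can be
                // bound as a shader resource later. Returns nullptr if the driver says no.
                IDirect3DTexture9* tex = nullptr;
                dev->CreateTexture(w, h, 1, D3DUSAGE_DEPTHSTENCIL, FOURCC_INTZ,
                                   D3DPOOL_DEFAULT, &tex, nullptr);
                return tex;
            }
            [/code<]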

            AFAIK, Oblivion should work. There is one known game bug where it doesn’t render properly when HDR post processing is selected because the game does something along the lines of “If AMD or Nvidia … else <???>” and neglects to send vertex/pixel shaders on the post processing pass.

            Fortunately, there are only a few older games like this which “discriminate” against Intel graphics based on vendor id. Most have workarounds – search for Fallout 3 Intel HD Graphics for examples. There are also some games that incorrectly make decisions based on how much “dedicated” memory the OS reports – this is wrong for unified memory devices. Notable examples include Grand Theft Auto IV and some of the Pro Evolution Soccer games.
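
            The broken check looks roughly like this (illustrative only; the 512 MB cutoff is made up, and this is not any particular title’s code):

            [code<]
            // Illustrative only: how a game might (mis)judge a GPU from DXGI adapter info.
            // On unified-memory IGPs, DedicatedVideoMemory is small even though plenty of
            // shared system memory is usable, so a naive "dedicated memory" check rejects them.
            #include <cstdio>
            #include <dxgi.h>
            #pragma comment(lib, "dxgi.lib")

            int main()
            {
                IDXGIFactory* factory = nullptr;
                if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
                    return 1;

                IDXGIAdapter* adapter = nullptr;
                if (factory->EnumAdapters(0, &adapter) == S_OK)
                {
                    DXGI_ADAPTER_DESC desc = {};
                    adapter->GetDesc(&desc);

                    // The problematic check: unified-memory parts fail it despite being usable.
                    bool naive_ok = desc.DedicatedVideoMemory >= 512ull * 1024 * 1024;

                    // A fairer check counts shared system memory for integrated GPUs too.
                    bool fairer_ok = (desc.DedicatedVideoMemory + desc.SharedSystemMemory)
                                     >= 512ull * 1024 * 1024;

                    // VendorId is also what the "if AMD or Nvidia ... else" checks key off
                    // (0x8086 is Intel, 0x10DE Nvidia, 0x1002 AMD).
                    std::printf("vendor 0x%04X  dedicated %llu MB  shared %llu MB  naive=%d fairer=%d\n",
                                desc.VendorId,
                                (unsigned long long)(desc.DedicatedVideoMemory >> 20),
                                (unsigned long long)(desc.SharedSystemMemory >> 20),
                                (int)naive_ok, (int)fairer_ok);
                    adapter->Release();
                }
                factory->Release();
                return 0;
            }
            [/code<]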

            I mentioned above that Intel started working hard at adding the missing features and fixing legacy bugs in 2010. At that time Intel also started actively working with the game developers – testing hundreds of pre-release games per year in our labs to ensure functionality and performance, application engineers working with the game developers to tune the games, etc. The examples listed in this article (Grid 2, Rome II) and in the release notes for the 15.31 drivers are a small sample of that effort.

            Because of that, I expect we will see much less of this sort of game bug going forward.

            You can help by reporting issues to intel.com with clear reproduction instructions (OS, driver version, game/patch, BIOS). If they are driver bugs on current drivers on supported platforms, they will be taken seriously. If they are BIOS bugs, we will refer you to the OEM/motherboard vendor. If they are game bugs, we forward them to the relevant developers.

            • Khali
            • 7 years ago

            I could not get any of the Mass Effect games to work on my laptop. Oblivion did not work either. A few of them would start OK, but a few minutes into the game it would lock up or CTD. Dragon Age: Origins and Dragon Age 2 both work fine. I tried all four of those games at the lowest settings and it did not matter. ME 1 runs for about 15 minutes then locks up. ME 2 and 3 won’t run at all. Oblivion won’t start up most of the time; when it does, you get a black screen and that’s it. I stopped trying to get games to run on it and pretty much only use the laptop for internet when I need to take it to my parents to look something up for them online.

            The laptop was intended for use when I am in the hospital from time to time. It serves its purpose, but not as well as I wanted. Next time I know to get a dedicated GPU.

            • Airmantharp
            • 7 years ago

            I believe I got ME2 and possibly Oblivion running on my i7/HD3000 laptop. Which one do you have?

            • Khali
            • 7 years ago

            My laptop is not here right now. I will have it back tomorrow and I will get the tech specs and install those games again to give you detailed information. Might take me a few days to get things installed again.

      • smilingcrow
      • 7 years ago

      It’s as easy to laugh at Intel now with regard to graphics as it was to laugh at the P4 back in the day. But if this is an indication that they are finally STARTING to get serious, then if I were one of their competitors I wouldn’t be laughing too hard or for too long.
      Only the paranoid survive, and I wouldn’t want Intel on my case if and when they get really serious.

    • tipoo
    • 7 years ago

    Now un-kill Project Offset, Intel.

    I’m still impressed by how that game looked for the year it was made in.

    • SoM
    • 7 years ago

    bring back 3DFX

    i loved my Voodoo

      • Chandalen
      • 7 years ago

      I buried my V5 5500 in the backyard when it let out the magic smoke.

      I miss having that card still. Oh well, it looks like nV has put the SLI tech to good use, so it’s not a complete wash.

        • WaltC
        • 7 years ago

        Hell of it is that I can’t remember what happened to my V5.5k…;)

        This is what I seem to remember (pardons if I got this wrong): nVidia’s use of SLI shares the acronym with 3dfx’s SLI, since nVidia got the rights to it when it bought the charred remains of 3dfx at a fire sale. In the V2-V5 days, 3dfx’s “SLI” meant “ScanLine Interleaving.” nVidia doesn’t do that at all; for nVidia, “SLI” means “Scalable Link Interface.”

        Like AMD’s CrossFire, the new SLI has each GPU render whole bands of scanlines; say, GPU #1 does the top half of the frame and GPU #2 does the bottom. If there’s more going on in the bottom half of the screen than in the top half, then GPU #2 works harder, and therefore slower, than GPU #1 on each frame, so the whole system is effectively slowed to the rate at which GPU #2 can render its bottom half of each frame. Interleaving the scanlines always seemed to me the perfect way to distribute the work among GPUs as evenly as possible: even in the bottom half of the frame where the “action” is, the two GPUs divide the workload evenly, each rendering every other scanline of every frame.
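
        A toy illustration of that load-balancing point (the per-scanline costs are made up and have nothing to do with any real hardware or driver):

        [code<]
        // Toy comparison: assign scanlines of one frame to two GPUs either by
        // interleaving (3dfx-style SLI) or by splitting the frame in half.
        // "cost" stands in for how much geometry/shading lands on each scanline.
        #include <cstdio>
        #include <vector>

        int main()
        {
            const int height = 8;                         // tiny "frame" for the example
            std::vector<int> cost(height);
            for (int y = 0; y < height; ++y)
                cost[y] = (y < height / 2) ? 1 : 5;       // the "action" is in the bottom half

            int interleave[2] = {0, 0}, split[2] = {0, 0};
            for (int y = 0; y < height; ++y)
            {
                interleave[y % 2] += cost[y];             // every other scanline per GPU
                split[y < height / 2 ? 0 : 1] += cost[y]; // top half vs. bottom half
            }
            std::printf("interleave: gpu0=%d gpu1=%d\n", interleave[0], interleave[1]);
            std::printf("split:      gpu0=%d gpu1=%d\n", split[0], split[1]);
            // Interleaving lands at 12 vs. 12; the half-split lands at 4 vs. 20,
            // so the GPU with the busy half becomes the bottleneck for the frame.
            return 0;
        }
        [/code<]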

      • Veerappan
      • 7 years ago

      Still love my Voodoos… I have both a Voodoo 3 2000 and a 3000 (PCI) lying around. One was still being used in my parents’ computer as recently as a year ago. The other one is still plugged into my Alpha PWS 500, waiting for me to fire it up again.

      • bcronce
      • 7 years ago

      They banked on 16bit graphics and got owned in the benchmarks when 32bit became standard.

      I do miss my Voodoo2 12MB with a Stealth cooler.

      • albundy
      • 7 years ago

      yeah, but I’m not the one who’s so far away.

      • jokinin
      • 7 years ago

      They could also revive the Intel 740 AGP, but that was so much slower than the Voodoo…

        • WaltC
        • 7 years ago

        That’s because Intel’s i74x/75x series was deliberately designed to make heavy use of AGP texturing. (I owned two of them for a short period before returning them for lack of performance.) The Intel cards came with anywhere from 4 MB to 8 MB of onboard RAM, IIRC, depending on brand and model, at a time when 3dfx and nVidia were routinely shipping 16 MB cards. Intel vainly tried to prove that AGP texturing was the way to go for 3D rendering. Remember that the Accelerated Graphics Port was an Intel standard, and Intel was in the business of selling AGP core logic, motherboards, and supporting CPUs; since video RAM cost ~$50 per megabyte but system RAM could be had for about half of that, the whole point of AGP texturing was that you could substitute the much cheaper system RAM for video RAM costing double. Intel was certainly right about the economics at the time, but erred badly on its performance estimates. With 16 MB of onboard RAM, the 3dfx and nVidia cards ate the i7xx cards for breakfast; the performance disparities were embarrassing for Intel.

        *An interesting footnote to that time in 3d-graphics history for those who might find it so…. 3dfx had always been honest about AGP texturing and how much faster texturing out of local ram was than texturing out of the much slower system ram across the AGP bus. 3dfx flatly stated on many occasions that its products did not support AGP texturing because of the performance problems it entailed. However, some sites like Sharky Extreme and AnandTech at the time got carried away with Intel’s marketing propaganda about AGP texturing and were faulting 3dfx for not supporting it because it was by their lights “the next level in 3d performance.” Of course, the poor, miserable performance of the i7xx products by Intel apparently did not cross any minds at Sharky’s or Anand’s…! Heh–it went completely over their heads! But here’s the amusing part of all of it…

        Seeing that Anand and Sharky were enraptured by the idea of AGP texturing as a performance feature, and were willing to ding 3dfx for not supporting it, nVidia started advertising that its 16 MB products *also* fully supported AGP texturing, just like the horrid i7xx Intel GPUs! Test after test, however, proved conclusively that nVidia’s 16 MB cards were in fact texturing from their much faster pool of local RAM, just like the 3dfx cards had done all along. (Wasn’t the first or last time nVidia would be less than honest in its advertising.) Both companies, 3dfx and nVidia, so thoroughly outclassed Intel’s AGP-texturing-dependent products that Intel was non-competitive and dropped out of the discrete 3D GPU market completely. That appears to remain the case for Intel to this very day: IGPs are in, but it doesn’t appear that Intel will ever do another discrete GPU product family. I think that’s too bad, but there it is.

          • l33t-g4m3r
          • 7 years ago

          3dfx still should have given users an option to use agp, aside from the v5 5500 which had a hardware limitation from the dual gpus, and needed a bridge chip that ended up in the 6000.

          I agree with you that AGP was overrated when it first came out, but it did serve a purpose. Local memory was faster, but it was also the reason 3dfx couldn’t use hi-res textures. Open-source drivers have removed most of the software limitations of the cards, and you can now run Quake 3 in hi-res with max textures on a single Voodoo2 and get a playable framerate via Mesa GL. Of course, this only works well on boards with high-speed interconnects / DDR.

          3dfx was using 16-bit textures, but it was rendering the output at 22-bit. This wasn’t explained nearly well enough to people; its “16-bit” rendering was much higher quality than nVidia’s 16-bit, and unless you were playing a game that used hi-def textures (rare), it wasn’t a problem. Not that you were admiring the scenery in Quake anyway, and the few games that only supported 16-bit rendering looked better than the competition.

          • Airmantharp
          • 7 years ago

          i740 (and there was nothing else) only competed with Riva128/Voodoo Graphics. They had versions with 24MB of RAM, which was otherwise unheard of. And they had higher quality output, a la Matrox.

          By the time other brands’ cards reached that amount of memory, they had already been eclipsed.

          AGP texturing was a great idea (you know, if texture resolution is more important than performance), and it did work well, but not well enough to keep up with the performance race 3dfx and Nvidia were in at the time.

      • auxy
      • 7 years ago

      In all seriousness, it’s kinda weird that nobody has brought back the brand name, at least. I don’t think Nvidia bought that, just their IP.

    • Sargent Duck
    • 7 years ago

    I only clicked on the article because it said “GRID 2”. I’m super excited about this game.

    • Damage
    • 7 years ago

    Added a bit to the story to clarify the purpose of the red pixels in the foliage screenshot.

    • axeman
    • 7 years ago

    This is the first I’ve heard of GRiD 2… ABOUT FREAKING TIME.

      • Farting Bob
      • 7 years ago

      I loved GRID. It was arcadey enough to get straight into the action without all the often unnecessary fluff that most NFS titles had. It wasn’t the most in-depth racing game, but it still allowed skill to overcome challenges, not just googling the exact optimal setup for each car and track.

    • Firestarter
    • 7 years ago

    [quote<]The first of these additions, dubbed InstantAccess, allows the CPU to read and write memory locations controlled by the IGP. This sort of cross-pollination between CPU and IGP is something AMD has explored in its APU development, as well, and it makes sense for chips with both CPUs and GPUs onboard.[/quote<] Pure speculation of course, but if Microsoft uses an APU similar to the PS4’s in the new Xbox, wouldn’t this sort of thing already be planned for whatever DirectX version comes with the Xbox, and consequently be backported to DirectX for Windows?

      • Airmantharp
      • 7 years ago

      You’d think so at the outset, but with consoles being beasts all their own, the likelihood of specific advancements being implemented on the PC is very low. While it may be AMD hardware running Microsoft code, the next Xbox will be neither a hardware equivalent to other AMD hardware, nor will the software be Windows with desktop DirectX.

      And if you think about it, these instructions Intel is touting are ‘cool’ and deserve mention, but seriously, will they be broadly used? They aren’t like TressFX, which actually works on other vendors’ hardware. Something similar might be done on AMD’s APUs, but that wouldn’t implicitly apply to their discrete cards.

        • cynan
        • 7 years ago

        [quote<]...the next Xbox will be neither a hardware equivalent to other AMD hardware, nor will the software be Windows with desktop DirectX.[/quote<] How do you know? While I'm sure the OS won't be a carbon copy of Windows Blue, it could be a stripped-down version with a media-player-type skin and GUI. And why wouldn't they use something that is tantamount to "desktop DirectX"? Just because they haven't in the past...

          • Airmantharp
          • 7 years ago

          Because Windows environments are constrained by being Windows environments: breaking compatibility hurts, and they have to do it slowly. Consoles are immune to this, so why would Microsoft burden an Xbox with a full-fat OS kernel and APIs?

          Further, the hardware being sold to Sony and Microsoft will likely be exclusive to those platforms (they may even own the IP), and may very well include functions and features not implemented in desktop products, like the 360’s ‘free’ anti-aliasing due to a small local buffer with dedicated logic.

          I agree that it will all be Windows- and DirectX-based, but there’s bound to be enough of a difference to make direct ports of technology nearly unfeasible. Another point of concern stems from the highly threaded yet low IPC nature of the new CPUs. It may become a challenge for systems with lower physical core counts (i3’s, and mobile dual-cores) to play ports without some re-engineering of the workload distribution.

    • puppetworx
    • 7 years ago

    With the resources Intel has you’d have thought they’d have the egg cracked by now.

      • My Johnson
      • 7 years ago

      You are correct to point that out. It’s why I flipped out of the stock. Cash is nice but you can’t make much off of it if you ain’t spending it on investment. What it gets at is that Intel is not being creative enough with business outside of CPU’s.

    • LoneWolf15
    • 7 years ago

    And it will be called…The Intel Starfighter II !!!

    [url<]http://en.wikipedia.org/wiki/Intel740[/url<]

      • Duck
      • 7 years ago

      Intel 750 Ti Boost

      • ludi
      • 7 years ago

      GMTA. Either that, or my age is starting to show.

      • Grigory
      • 7 years ago

      Two words: Death Blossom

        • Krogoth
        • 7 years ago

        [url<]http://www.youtube.com/watch?v=6U7rOUSvYM8[/url<]

          • Celess
          • 7 years ago

          [url<]https://techreport.com/r.x/llano/ansio-pattern-intel.png[/url<]

      • willmore
      • 7 years ago

      I was cleaning the basement yesterday and found mine! I put it next to an S3 and a Via card so that it wouldn’t feel so lonely.

    • lilbuddhaman
    • 7 years ago

    Fighting their way to third place, and they won’t go any further by trying to get devs to use yet another set of proprietary hooks… Can’t wait to see what S3 has up their nonexistent sleeves in retaliation.

      • willmore
      • 7 years ago

      Via supposedly has some secret Linux drivers. Maybe they’ll try to use the Stream Box as a beachhead? </humor>

    • MadManOriginal
    • 7 years ago

    I hope the open source QuickSync improvements are implemented for CPUs that are currently out and have QuickSync.

      • Damage
      • 7 years ago

      Yeah, I didn’t say they weren’t. Not sure how this works with the licensing changes.

        • MadManOriginal
        • 7 years ago

        It was this line, which you reported that Intel said, which made me think of this: “so it announced at this year’s CES that it would be altering them for [u<]fourth-gen Core processors[/u<]" I understand that is about the licensing terms, but if said licensing terms only apply to Haswell it could leave older QuickSync CPUs out? It would be unfortunate since the hardware is similar enough as far as I know with HD 2000 and 3000 being mostly updates and tweaks. Or maybe it's up to the software guys to figure out something if there are differences with older QuickSync CPUs. If you could clarify whether the license terms will apply to QuickSync for all CPUs that have it, that would be great! (Although maybe Intel doesn't like to talk about older CPUs getting 'new' features.)

          • Airmantharp
          • 7 years ago

          I’d like to hear about this too. QuickSync has the potential to revolutionize how we look at handling video, if only real software programs could make use of it.

      • Deanjo
      • 7 years ago

      Actually, with the release of the Ivy Bridge documentation, it has been possible to use QuickSync in an open-source manner for a while, at least in Linux. It’s their Windows implementation that has the open-source-unfriendly issues.

      [url<]https://01.org/linuxgraphics/documentation/2012-intel-core-processor-family[/url<] More specifically [url<]https://01.org/linuxgraphics/sites/default/files/documentation/ivb_ihd_os_vol2_part3.pdf[/url<]

        • Mr. Eco
        • 7 years ago

        You were downvoted for making claims and supporting them with sources. Upvoted you to fix the wrong-doing.

          • Deanjo
          • 7 years ago

          Ya, some people will get their panties all in a knot when Linux one-ups Windows. For them I would suggest reading the following thread.

          [url<]http://software.intel.com/en-us/forums/topic/311823[/url<] Right near the end it shows that it is a Windows issue which depends on Intel’s QS middleware. In Linux this is not needed, and QuickSync has been supported through VA-API for a while.

          [quote<]I also want to ask: 1. Is MFX, the low level interface, accessible under Windows?

          Petter Larsson (Intel), Wed, 06/27/2012 - 16:59: Hi Michael, see answers below. 1. No[/quote<]

          It’s one of the reasons I wish Damage would stop claiming QuickSync isn’t open-source friendly, because it is, and it’s fully documented under Creative Commons. It’s the [b<]Windows QS SDK[/b<] licensing that isn’t open-source friendly.
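
          For anyone curious what that looks like from the Linux side, here’s a minimal sketch that opens a DRM render node and asks VA-API whether an H.264 encode entrypoint is exposed (assumes libva/libva-drm and a render node at /dev/dri/renderD128; error handling trimmed):

          [code<]
          // Minimal VA-API probe: initialize libva on a DRM render node and check
          // whether the driver exposes an H.264 encode entrypoint, which is how the
          // hardware encoder is reached on Linux. Illustrative sketch only.
          #include <cstdio>
          #include <fcntl.h>
          #include <unistd.h>
          #include <vector>
          #include <va/va.h>
          #include <va/va_drm.h>

          int main()
          {
              int fd = open("/dev/dri/renderD128", O_RDWR);   // assumed render node path
              if (fd < 0) { std::perror("open"); return 1; }

              VADisplay dpy = vaGetDisplayDRM(fd);
              int major = 0, minor = 0;
              if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
                  std::fprintf(stderr, "vaInitialize failed\n");
                  close(fd);
                  return 1;
              }
              std::printf("VA-API %d.%d, driver: %s\n", major, minor, vaQueryVendorString(dpy));

              // Which entrypoints does the driver offer for the H.264 main profile?
              std::vector<VAEntrypoint> entrypoints(vaMaxNumEntrypoints(dpy));
              int num = 0;
              vaQueryConfigEntrypoints(dpy, VAProfileH264Main, entrypoints.data(), &num);

              bool can_encode = false;
              for (int i = 0; i < num; ++i)
                  if (entrypoints[i] == VAEntrypointEncSlice) can_encode = true;
              std::printf("H.264 encode entrypoint: %s\n", can_encode ? "yes" : "no");

              vaTerminate(dpy);
              close(fd);
              return 0;
          }
          [/code<]

          Build against libva with something like: g++ probe.cpp -lva -lva-drm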

        • NeelyCam
        • 7 years ago

        Looks super easy. Almost as easy as juggling 15 balls at the same time.

        You do know that it’s [i<]possible[/i<] to design your OWN custom ARM-based chip, have it support your own custom PCIeG4-like link to your own custom 1kW GPU, and it'll [i<]destroy[/i<] any QuickSync Intel could come up with in the next five years or so. Or, if you prefer a software route, you could just write your own Windows 9 that's 20x better than any Windows before it. Maybe it comes with tinting, free of charge.

          • Deanjo
          • 7 years ago

          [quote<]You do know that it's possible to design your OWN custom ARM-based chip, have it support your own custom PCIeG4-like link to your own custom 1kW GPU, and it'll destroy any QuickSync Intel could come up with in the next five years or so.[/quote<] No need to design your own when many ARM solutions out there are perfectly capable of doing this right now with their built-in hardware encoders. Even a Pi can do that.

            • NeelyCam
            • 7 years ago

            [quote<]Even a Pi can do that.[/quote<] Do what? Support "PCIeG4"? Have enough horsepower to support a 1kW GPU? Moreover, you decided to intentionally ignore the point.

            • Deanjo
            • 7 years ago

            You had a point? Because all I saw was drivel written from someone that doesn’t know what occurs outside the limitations of x86 windows.

            • NeelyCam
            • 7 years ago

            I was talking about user friendliness, which you, once again, proved is the polar opposite of anything Linux. The funniest thing is that you don’t even see it: because you can navigate through the arcane “arts” of Linux, you miss the point that 99% of people can’t.

            I sort of hoped that retirement would bring you back to the real world, but I guess I was wrong.

            • Deanjo
            • 7 years ago

            Do you honestly think “user-friendliness” has ANYTHING to do with the gripe about the SDK’s licensing terms? And FYI, for a coder, that documentation is EXTREMELY user friendly (the user being the coder), since it’s open instead of having to deal with the obfuscation of a closed system like Windows to get hardware-level access.

    • derFunkenstein
    • 7 years ago

    Goofy marketing lingo should have been the SECOND thing that Intel copied from its competitors. Or something (hey I’m new)

      • NeelyCam
      • 7 years ago

      [quote<](hey I'm new)[/quote<] You lie!

        • derFunkenstein
        • 7 years ago

        does it show? I really tried to cover that one up.

          • NeelyCam
          • 7 years ago

          Lips were moving

    • shaq_mobile
    • 7 years ago

    AMD AND Intel serious about gaming?

    What could this mean?

      • NeelyCam
      • 7 years ago

      That NVidia wins again…?

      • Narishma
      • 7 years ago

      It means more choice. We’ll now have games that run like crap on AMD and Nvidia hardware, games that run like crap on Intel and AMD hardware, and games that run like crap on Intel and Nvidia hardware.

        • tipoo
        • 7 years ago

        Is that name from WoT?

          • auxy
          • 7 years ago

          [b<]Narasimha[/b<] is an avatar of Vishnu in Hindu mythology.

            • tipoo
            • 7 years ago

            I’m of Indian descent, I know 😛
            But his name is Narishma, which is a character in the Wheel of Time series. Robert Jordan no doubt got all the names from various mythologies.

          • Narishma
          • 7 years ago

          Yes.
