Intel graphics drivers employ questionable 3DMark Vantage optimizations

In the early days of GPUs, application-specific performance optimizations in graphics drivers were viewed by many as cheating. Accusations were hurled with regularity, and in some cases, there was real cheating going on. Some optimizations surreptitiously degraded image quality in order to boost performance, which obviously isn’t kosher. Optimizations that don’t affect an application’s image quality are harder to condemn, though, especially if you’re talking about games. If a driver can offer users smoother gameplay without any ill effects, why shouldn’t it be allowed?

The situation gets more complicated when one considers optimizations that specifically target benchmarks. Synthetic tests don’t have user experiences to improve, just arbitrary scores to inflate. Yet the higher scores achieved through benchmark-specific optimizations could influence a PC maker’s choice of graphics solution or help determine the pricing of a graphics card.

Futuremark’s popular 3DMark benchmark has been the target of several questionable optimizations over the years. Given that history, it’s not surprising that the company has strict guidelines for the graphics drivers it approves for use with 3DMark Vantage. These guidelines, which can be viewed here (PDF), explicitly forbid optimizations that specifically target the 3DMark Vantage executable. Here’s an excerpt:

With the exception of configuring the correct rendering mode on multi-GPU systems, it is prohibited for the driver to detect the launch of 3DMark Vantage executable and to alter, replace or override any quality parameters or parts of the benchmark workload based on the detection. Optimizations in the driver that utilize empirical data of 3DMark Vantage workloads are prohibited.

No ambiguity there, then: Vantage-specific optimizations aren’t allowed.

Intel may not be playing fair, though. We recently learned AMD has notified Futuremark that Intel’s 15.15.4.1872 Graphics Media Accelerator drivers for Windows 7 incorporate performance optimizations that specifically target the benchmark, so we decided to investigate.

We tested 3DMark Vantage 1.0.1 with these drivers on a G41 Express-based Gigabyte GA-G41M-ES2H motherboard running the Windows 7 x64 release-to-manufacturing build, a Core 2 Duo E6300, 4GB of DDR2-800 memory, and a Raptor WD1500ADFD hard drive.

We first ran the benchmark normally. Then, we renamed the 3DMark executable from “3DMarkVantage.exe” to “3DMarkVintage.exe”. And—wouldn’t you know it?—there was a substantial performance difference between the two.
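
For readers who want to repeat the comparison, the procedure is simply to copy (or rename) the benchmark executable so it carries a name the driver has no reason to recognize, then run both versions. Here is a minimal sketch, assuming Python is available; the install path below is an assumption and may differ on your system.

# A minimal sketch of the renaming test; the install path is an assumption.
import shutil
import subprocess
from pathlib import Path

vantage_dir = Path(r"C:\Program Files\Futuremark\3DMark Vantage")  # assumed install location
original = vantage_dir / "3DMarkVantage.exe"
renamed = vantage_dir / "3DMarkVintage.exe"

# Copy the executable under a name the driver shouldn't recognize,
# then launch each copy in turn and compare the resulting scores.
shutil.copy2(original, renamed)
for exe in (original, renamed):
    subprocess.run([str(exe)], cwd=vantage_dir, check=False)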

Our system’s overall score climbs by 37% when the graphics driver knows it’s running Vantage. That’s not all. Check out the CPU and GPU components of the overall score:

The GPU score jumps by a whopping 46% thanks to Intel’s apparent Vantage optimization. At the same time, the CPU score falls by nearly 10%. Curious.

Next, we ran a perfmon log of CPU utilization during each of 3DMark’s CPU and GPU component tests. Vantage takes its sweet time loading each test, so our start and end times aren’t perfectly aligned for each run. However, the pattern is pretty obvious.

In the GPU tests, the system’s CPU utilization is much higher with the default executable than with the “3DMarkVintage” executable. There isn’t much difference in CPU utilization in the CPU tests, though.
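
If you want to capture a similar utilization trace yourself, any per-second CPU sampler will do. Below is a minimal sketch that assumes the third-party psutil package; it writes a timestamped CSV of system-wide CPU utilization that you can line up against the benchmark run.

# Minimal CPU-utilization logger (assumes the third-party psutil package).
import csv
import time
import psutil

with open("cpu_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "cpu_percent"])
    start = time.time()
    while time.time() - start < 600:            # sample for ten minutes
        usage = psutil.cpu_percent(interval=1)  # system-wide average over the last second
        writer.writerow([round(time.time() - start, 1), usage])
        f.flush()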

What’s really going on

Intel appears to be offloading some of the work associated with the GPU tests onto the CPU in order to improve 3DMark scores. When asked for comment, Intel replied with the following:

We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hardware available in the system. Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.

This CPU-assisted vertex processing doesn’t appear to affect Vantage’s image quality. However, Intel is definitely detecting 3DMark Vantage and changing the behavior of its drivers in order to improve performance, which would appear to be a direct contravention of Futuremark’s guidelines.

At present, Intel’s 15.15.4.1872 graphics drivers don’t appear on Futuremark’s approved driver list for 3DMark Vantage. None of the company’s Windows 7 drivers do. The 7.15.10.1554 Windows Vista x64 drivers that are on the approved list don’t appear to include the optimization in question, because there’s no change in performance if you rename the Vantage executable when using those drivers.

Violating Futuremark’s driver optimization guidelines is one thing, but Intel also claims it’s offloading vertex processing to enhance performance in games. Indeed, the very same INF file that singles out 3DMarkVantage.exe also names other executables.

HKR,, ~3DMarkVantage.exe, %REG_DWORD%, 2
HKR,, ~3DMarkVantageCmd.exe, %REG_DWORD%, 2
HKR,, ~CoJ_DX10.exe, %REG_DWORD%, 2
HKR,, ~Crysis.exe, %REG_DWORD%, 2
HKR,, ~RelicCoH.exe, %REG_DWORD%, 2
HKR,, ~UAWEA.exe, %REG_DWORD%, 2
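
An INF’s HKR entries land under the display adapter’s software key in the registry when the driver installs, so you can check whether your own installed driver carries these per-executable flags. The sketch below (Python again) walks the standard display-adapter class key and prints any “~”-prefixed values; whether Intel’s driver stores the flags directly under the numbered adapter subkeys is an assumption worth verifying on your own system.

# Minimal sketch: list "~<exe name>" values under the display-adapter class key.
# The class GUID is the standard display-adapter class; the exact location of
# Intel's per-executable flags is an assumption, so adjust the path as needed.
import winreg

CLASS_KEY = r"SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as class_key:
    for i in range(winreg.QueryInfoKey(class_key)[0]):           # each adapter subkey (0000, 0001, ...)
        subkey_name = winreg.EnumKey(class_key, i)
        try:
            with winreg.OpenKey(class_key, subkey_name) as dev_key:
                for j in range(winreg.QueryInfoKey(dev_key)[1]): # each value under that subkey
                    name, data, _ = winreg.EnumValue(dev_key, j)
                    if name.startswith("~"):                     # per-executable flags use a "~" prefix
                        print(subkey_name, name, data)
        except OSError:
            continue                                             # some subkeys (e.g. Properties) are protected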

One of the games on the list for detection, Crysis Warhead, should have no problem saturating an integrated graphics chipset, to say the least. We tested it with the executable under its original name and then renamed to Crisis.exe, using FRAPS to collect real-world frame rate data with the game running at 800×600 and minimum detail levels.

Intel’s software-based vertex processing scheme improves in-game frame rates by nearly 50% when Crysis.exe is detected, at least in the first level of the game we used for testing. However, even 15 FPS is a long way from what we’d consider a playable frame rate. The game doesn’t exactly look like Crysis Warhead when running at such low detail levels, either.

Our Warhead results do prove that Intel’s optimization can improve performance in actual games, though—if only in this game and perhaps the handful of others identified in the driver INF file.

How do Intel’s driver optimizations affect the competitive landscape? To find out, we assembled an AMD 785G-based system that’s pretty comparable to the G41 rig we used for testing: Athlon II X2 250 processor, Gigabyte GA-MA785GPMT-UD2H motherboard, the same Raptor hard drive, and 4GB of DDR3 memory running at 800MHz with looser timings than the Intel system. We even disabled the board’s dedicated sideport graphics memory, forcing the GPU to share system RAM like the G41.

With the Futuremark-approved Catalyst 9.9 drivers, the AMD 785G-based system scored 2161 in 3DMark Vantage—nearly the same score as the 2132 3DMarks the G41 gets when it’s playing by the rules, but well below the 2931 the G41 posts with optimizations enabled. (Renaming the Vantage executable on the AMD system had no notable effect on benchmark scores.) The app-specific optimization gives the G41 a definitive lead in 3DMark Vantage.

Here’s the tricky part: the very same 785G system managed 30 frames per second in Crysis Warhead, which is twice the frame rate of the G41 with all its vertex offloading mojo in action. The G41’s new-found dominance in 3DMark doesn’t translate to superior gaming performance, even in this game targeted by the same optimization.

That’s no great shock, really. We’ve seen Intel’s integrated graphics solutions thoroughly outclassed by rivals from AMD and Nvidia on multiple occasions.

All of which brings us back to the perils of using 3DMark Vantage as a substitute or proxy for testing real games. Those perils are well established by now. PC makers and others in positions of influence would do well to refocus on real applications, especially when evaluating integrated graphics solutions, which don’t need cutting-edge DirectX workloads to reach their limits. 3DMark’s traditionally purported role as a predictor of future game workloads makes little sense in the context of these modest IGPs.

We’re curious to see what Futuremark will make of Intel’s Windows 7 graphics drivers. As far as we can tell, the latest GMA drivers are in violation of its rules. We’ve asked Futuremark for comment on this issue, but the company has yet to reply.

Comments closed
    • Bensam123
    • 10 years ago

    I don’t see a problem here either. As long as they aren’t specifically degrading the quality of the benchmark to gain an edge. Also, as someone else said, if they do this for other games, that’s awesome.

    I wholly encourage full utilization of the system for improved overall performance. If AMD/Nvidia are too proud to offload precious GPU data to non-GPUs, let alone other companies’ GPUs, so be it. This is a feature I’ve been looking forward to for years, and the recent PhysX lockout shows more of a reason why it hasn’t happened than anything.

    I’m glad Intel isn’t afraid to use all their guns in their arsenal.

      • travbrad
      • 10 years ago

      I agree it would be nice if it improved the performance of a lot of games. However, it only improves the performance of a select few (commonly benchmarked) games, and those games still remain unplayable. So you get a slightly less stuttery slide show, and no improvement to games that are actually playable. Yay?

    • flip-mode
    • 10 years ago

    I don’t see much of a problem here. The only problem I see is if this is the only application Intel is optimizing for – just to get a benchmark bump. Intel needs to optimize for games too.

      • Meadows
      • 10 years ago

      A lot of poor gamers probably appreciate the inclusion of WoW speed bumps. That game needs a half-competent GPU nowadays.

    • maroon1
    • 10 years ago

    What Intel did is not cheating because:

    1-Those optimizations boost performance in gaming, not only in 3DMark

    2-Who cares about 3DMark, anyway? The true benchmarks are when they actually run in games; the number of fps and the graphical quality.

    • travbrad
    • 10 years ago

    Yet another reason to never run 3DMark benchmarks. They mean absolutely nothing. When a card can score 150% of its competitor’s yet only 50% in a real game, it truly is a worthless benchmark. It costs $20 for the “pro” version too, which reviewers could use to buy a REAL GAME to test.

    I don’t think there is anything wrong with offloading some work to the CPU, but doing it only for the most bench-marked games is very fishy indeed. As many people have mentioned, these games are still unplayable anyway. It’s the older/less-intensive games that would benefit greatly from this, but those games don’t show up in hardware reviews.

    Screw actually giving your customers a good experience, all that matters is suckering them into buying the product. The sad thing is, it’s a good business model too. Most of the people getting these are probably less knowledgeable about hardware, so they’ll upgrade their computer sooner (because “it’s too slow for this game”), buying more Intel product.

    Okay, that’s enough Intel hate for one post 🙂

    • Ashbringer
    • 10 years ago

    This is why I think that a good benchmark is with something like PCSX2.
    http://pcsx2.net/
    It supports 64-bit, multi-core support, supports DX9/DX10 and even DX11. It’s constantly updated and should prevent any tweaking for video card drivers. Plus, you can choose from a wide selection of games to benchmark with. Plus, this program can really stress your CPU, even if it’s Quad core.

      • derFunkenstein
      • 10 years ago

      It does not stress quad cores. Read the documentation – it cannot use more than 2 cores.

        • Meadows
        • 10 years ago

        Exactly, that’s an upcoming update I’m looking forward to.

    • quock
    • 10 years ago

    I’d like to see the scores when you rename 3DMarkVantage.exe to crysis.exe or any of the other games that are mentioned in the INF file.

    • flashbacck
    • 10 years ago

    Sad that even with these optimizations, no one in their right mind would pick Intel graphics if they had a choice.

      • Usacomp2k3
      • 10 years ago

      For a non-gaming machine, I’d pick Intel any day. I’ve found them to just work more reliably in laptops, especially in the business environment.

        • homerdog
        • 10 years ago

        The GMA 950 in my laptop refuses to output a 1920×1080 signal to my 1080p LCD while both my desktops with NVIDIA cards have no issue with this whatsoever. No way I’ll buy another laptop with Intel graphics.

          • bhtooefr
          • 10 years ago

          By “refuses,” what have you tried? Is it just not listed as an available mode?

          If that’s the case, there are various ways to add the mode to the registry. Or PowerStrip can do it for you.

    • Delphis
    • 10 years ago

    How hard would it be to have 3DMark copy the executable it’s going to run to something random, then execute it? The process name wouldn’t be able to be ‘targeted’ like this then.

    It’s crappy Intel would do something like that instead of simply enabling the feature for any load. Just make it a user option to enable it.

      • sreams
      • 10 years ago

      The .exe needs to be able to be targeted in the case of multi-GPU setups.

    • wingless
    • 10 years ago

    I once read an article about a piece of software that allowed the user to change the CPUID on an AMD or VIA CPU to “Genuine Intel”. With this “hack” loaded, software did in fact run faster with Intel “optimisations”. Info on this can be found at these two sites:

    http://arstechnica.com/hardware/reviews/2008/07/atom-nano-review.ars/6
    http://forums.amd.com/devforum/messageview.cfm?catid=203&threadid=95754

      • pogsnet
      • 10 years ago
      • Huggy_Bear
      • 10 years ago

      Are you guys so naive as to believe that any company that will invest time and resources into optimizing and validating its SW stack for its own HW will also do so for all its competitors’ HW as well??
      Personally, I do not see any problem w/ enabling app-specific optimizations as a temporary workaround until a true (read: app-agnostic) dynamic load-balancing heuristic can be implemented and as long as these optimizations do not take dirty shortcuts by e.g. reducing the rendering quality (faster but low-quality texture filtering comes to mind…).
      Finally, for the conspiracy-theorist out there, you need to get a reality check and look at what the whole industry has been doing for many years now… E.g. if you own an nVidia gfx card, simply launch the nVidia control panel and look into the “Manage 3D settings” tab… What-do-you-know!? App-specific performance optimization profiles (even shows 3DMarkVantage.exe, etc…).
      I rest my case.

        • zima
        • 10 years ago

        What? The sw optimisations are already there, all we expect is to use them on the basis of features reported by the CPU, not simply its name.

        If you think about it, it would be actually good for Intel if software was returning errors in such case. But it doesn’t. When tricked into running as on Intel it just utilises that different cpu better and runs faster.

    • Waco
    • 10 years ago

    It’d be interesting to see what kind of results you can get with a good quad core and this “hack” enabled for all executables. I wouldn’t be surprised if it turns the integrated GPU into something worthwhile for light gaming (like on a HTPC or something).

    • wingless
    • 10 years ago

    I wonder what would happen if you tricked all these synthetic benchmarks Intel does so well in into thinking an AMD CPU was an Intel CPU, or made the software think an Intel CPU wasn’t an Intel one. Would performance be different on the CPU side of things as well?

    I also find it interesting that Intel chose to put optimizations in the most popular games and synthetic tests reviewers benchmark with. I will no longer trust any performance numbers coming from Intel.

      • Thorburn
      • 10 years ago

      It’s quite hard to run an AMD processor with an Intel chipset. The optimisation is in the graphics driver, which uses the CPU to run some of the shader instructions to enhance graphics performance if the CPU is under-utilised.

      As for refusing to accept optimised code – AMD/ATI, NVIDIA and Intel all work with developers to optimise software for their products, if you refuse to believe data from optimised titles you won’t have much to work with.

    • green
    • 10 years ago

    from the inf:

    [Enable3DContexts_CTG_AddSwSettings]

    HKR,, ~3DMark03.exe, %REG_DWORD%, 1
    HKR,, ~3DMark06.exe, %REG_DWORD%, 1
    HKR,, ~dreamfall.exe, %REG_DWORD%, 1
    HKR,, ~FEAR.exe, %REG_DWORD%, 1
    HKR,, ~FEARMP.exe, %REG_DWORD%, 1
    HKR,, ~HL2.exe, %REG_DWORD%, 1
    HKR,, ~LEGOIndy.exe, %REG_DWORD%, 1
    HKR,, ~RelicCOH.exe, %REG_DWORD%, 1
    HKR,, ~Sam2.exe, %REG_DWORD%, 1
    HKR,, ~SporeApp.exe, %REG_DWORD%, 1
    HKR,, ~witcher.exe, %REG_DWORD%, 1
    HKR,, ~Wow.exe, %REG_DWORD%, 1

    HKR,, ~3DMarkVantage.exe, %REG_DWORD%, 2
    HKR,, ~3DMarkVantageCmd.exe, %REG_DWORD%, 2
    HKR,, ~CoJ_DX10.exe, %REG_DWORD%, 2
    HKR,, ~Crysis.exe, %REG_DWORD%, 2
    HKR,, ~RelicCoH.exe, %REG_DWORD%, 2
    HKR,, ~UAWEA.exe, %REG_DWORD%, 2

    my guess would be dx9 offloads for first section and dx10 offloads for second

    • ptsant
    • 10 years ago

    People saying that it’s just “an optimization” should consider the fact that benchmarking is usually done on ultra-high-end CPUs, while integrated GPUs are usually paired in real life with much more modest processing power. Software processing in test conditions (=big cpu) will be more beneficial than in real life (=small cpu).

    On the other hand, for GPU bound test you expect your puny but honest Radeon 4670 to deliver the same kind of performance, whether you have an i7 975 or a small Core 2 Duo.

      • derFunkenstein
      • 10 years ago

      Do you actually read tech websites? Every integrated graphics benchmark on reputable websites (specifically TR and Anandtech) uses lower-end CPUs for these benches. Including this one, which again you obviously didn’t read.

        • potatochobit
        • 10 years ago

        ‘Dell’ does not make their purchasing decisions based on the TR website reviews. I believe his point is that Intel will further skew the appearance of how good their IGP is by allowing a high-end processor most consumers will not purchase to offload the work onto, resulting in numbers the average user will not obtain.

        Although, honestly, even if ATI IGPs were better I think most companies would still end up with intel chips as their staple. It’s not really about the benchmark numbers… it’s about the dollar numbers.

          • derFunkenstein
          • 10 years ago

          OK, honestly, TR is using the same CPUs (or rough equivalents of them) in the reviews. You’re just looking for a fight, so you’ll have to go elsewhere.

    • Thorburn
    • 10 years ago

    Would anyone be complaining if the optimisation was reversed – swap the default behaviour to use the CPU off-load (which, when you have for example a Core 2 Quad + G45, is a very sensible thing to do) and then have a list of titles NOT to use it for? Now you aren’t detecting 3DMark.

    People will gladly tell you that with low end graphics the CPU is definitely not the limiting factor for performance, so why not use it to enhance it?

    • Ashbringer
    • 10 years ago

    For those who weren’t alive when the Quake Quack thing happened.
    http://www.firingsquad.com/hardware/radeonquack/default.asp

      • MadManOriginal
      • 10 years ago

      I would be impressed if there are many or even any 8 year olds reading TR.

        • eitje
        • 10 years ago

        alive in the enthusiast sense. 😉

      • _Sigma
      • 10 years ago

      The ‘Quake Quack’ comments now make sense. Thanks for the link.

    • wira020
    • 10 years ago

    maybe less…

    • Meadows
    • 10 years ago

    This news article makes me laugh.

    • sledgehammer
    • 10 years ago

    intel are thieves, they are going to rot in hell

    • spuppy
    • 10 years ago

    In the end, Intel’s controller is incapable of producing playable framerates in today’s games anyway. So who really cares?

      • DrDillyBar
      • 10 years ago

      indeed

      • wira020
      • 10 years ago

      u call that playable?? LOL

        • Meadows
        • 10 years ago

        No, he didn’t.

          • Anonymous Coward
          • 10 years ago

          LOL

    • MadManOriginal
    • 10 years ago

    Although it wouldn’t change the ultimate conclusion, using a G41 vs. a 785G is kind of a stretch, but I appreciate you had to use what was on hand.

    Targeting Vantage is the only really naughty bit here and that’s just because it goes against Futuremark’s rules. Targeting popular benchmark games by .exe just isn’t great because it smacks of benchmark inflation but at least it brings performance improvement to those titles. If it were more generalized, which quite frankly is what NV and ATi drivers do if not in such a simplistic .exe-targeting way, that would be best. I do consider offloading to use overall system resources to their utmost a smart thing which should be encouraged not poopoo’d. This isn’t quite ‘quack’ level yet because the drivers aren’t Futuremark approved yet.

    A nice little investigation nonetheless.

      • zima
      • 10 years ago

      A stretch? Why? They are in the same price range, that’s all that matters.

      The games they are targeting remain unplayable anyway. They ignore many older / graphically less intensive games that could benefit greatly.

        • MadManOriginal
        • 10 years ago

        Maybe you should go check prices before saying things like ‘they’re in the same price range.’ G41 boards are cheaper than 785G boards and are priced more like 760G boards. G45 vs 785G would have been the ‘right’ comparison; although G45s do range up a little higher, they start around the same.

        I do however agree with your second paragraph so that’s why it’s not a big deal for the sake of investigating whether it happens and how it’s implemented.

          • AMDguy
          • 10 years ago

          I think the clear result of Intel’s gaming 3DMark is that 3DMark should no longer be used for benchmarking as it no longer gives a reliable indication of performance in games.

          Just benchmark the games.

          • zima
          • 10 years ago

            I’m fully aware of the prices…

            • MadManOriginal
            • 10 years ago

            Well that’s unfortunate that they aren’t the same price ratio where you are. I’m sure a lot of prices are off versus the USA just like other things.

            The second sentence was “I agree with…”

            • zima
            • 10 years ago

            I don’t really think it’s academic. Integrated GFX is great for many older games (or even some currently being released – say, Torchlight). How good an integrated GFX you get for the price determines the threshold of “I cannot play this game”.

            So on the surface what Intel does is good – they make their GFX better…

            • MadManOriginal
            • 10 years ago

            Well yes, like most everyone else I agree that they should try to extend this to as many games as possible. But you’ve got to be talking about really old games at low settings for integrated graphics to perform well. Otherwise in many cases a sub-$40 used graphics card will blow it away.

    • redpriest
    • 10 years ago

    I think you guys are also missing an important factor here. Benchmarks, even synthetic ones like 3DMark Vantage, are probably used by OEMs (note that I don’t know that 3DMark Vantage is used like this, but I know for sure other benchmarks that are synthetic *are*) for competing on bids for computer systems, since they’re static and are supposed to represent an entire generation of computer games on that API. So having a higher score that makes you look good in it probably translates to hundreds of millions of dollars of potential revenue. That’s why targeting it for optimization is shady, and that’s why Futuremark probably forbids it (among other reasons).

    -My opinions are my own and not those of my employer.

    • odizzido
    • 10 years ago

    I know that intel is working on their own real GPU, so I can see why maybe this would matter……but who looks at intel graphics when they want to play games?

      • potatochobit
      • 10 years ago

      people who like 600$ laptops

        • odizzido
        • 10 years ago

        yeah, but if you are getting a 600 dollar laptop for gaming, you are doing it wrong.

          • bittermann
          • 10 years ago

          The average user doesn’t know that….

    • Welch
    • 10 years ago

    Sad… if this “Optimization” was intended to actually improve performance in games, you would not see it targeting games by their process name. Instead, the “Optimization” would kick in once it noticed the need to offload some of this workload to the CPU. It’s clearly a quick parlor trick by Intel to make their IGP look like it’s up to snuff with the rest of the REAL GPUs out there.

    Stick to CPUs, Intel, and leave the gaming graphics to the real GPU manufacturers. Your IGP will be useful for video playback and The Sims at best.

    Just a shameful dirty trick.

    • Freon
    • 10 years ago

    Just to play devil’s advocate…

    I’m not sure Intel is really beholden to 3DMark’s wishes. That is, I’m not quite sure I’d rush off to say Intel is obligated to follow “Futuremark’s guidelines.”

    Doesn’t necessarily mean it can’t be seen as dirty pool, but I’ve never been impressed with the idea of synthetic benchmarks. Sure, renaming the EXE is obvious, but I don’t see how they are limited to such unsavvy but equally worthless detection methods.

    At some point there is a line between detecting a workload and adjusting its dispatch, and detecting a very specific benchmark’s workload and using a careful, narrow-to-one configuration. Intel is clearly copping out cheaply here, while not providing a widespread optimization.

    But do you think AMD and NV don’t do similar things, just on a more savvy basis, and with at least some promise of widespread effect in games? It is an imperative of their business. I think the more a particular benchmark is used, whether it be 3DMark or a timedemo, the more prone it is to being targeted. A good test includes a large and representative sample of popular engines and games, and that is part of what consumers should look for in reviews.

    • Toadster
    • 10 years ago

    so what happens if you do this same test on any system?
    i.e. does an AMD system respond the same way when you rename the files?

    just curious on the test algorithm here…

      • jdaven
      • 10 years ago

      Read the article and your question becomes magically answered.

      Seriously read the article.

        • Toadster
        • 10 years ago

        ahh – I see now

        (Renaming the Vantage executable on the AMD system had no notable effect on benchmark scores.)

        Just was looking for more prominent data i.e. charts, etc…

    • UberGerbil
    • 10 years ago

    Offloading work to the CPU is a sensible and obvious strategy, particularly as the advent of multi-core processors means there’s often at least one core with excess capacity. DirectX itself will do this for the 10level9 feature, to enable DX10/11 capabilities on GPU hardware that doesn’t otherwise offer it.

    However, the crux of this particular controversy is well-summarized by Geoff…

    • jdaven
    • 10 years ago

    For all of you who don’t see a problem here, you are not thinking clearly. Remember Intel is just a company so don’t get all defensive because you are such a fanboi that you can’t see the forest through the trees.

    Just remember, Intel is a company and capable of doing some intentional wrong to fool customers and hurt competition. DO NOT defend them just because you want to justify your purchasing/allegiance decision. All companies are capable of such anti competitive practices and just because you buy from them doesn’t automatically excuse such practices.

    That being said and out of the way, here is why this is bad. Benchmarks from companies like FutureMark are designed to allow easy advertising of how well a product will perform with respect to other competition. It plainly states in the review that the new score with optimizations exceeds the score of AMD’s competing integrated video solution. However, the Intel solution is much slower than AMD in games as indicated by the Crysis scores. Intel knows they are slower, therefore, they are detecting the benchmark to increase performance.

    Now you might say, well, the customer can just look at real gaming benchmarks. Intel thought of this too and made the optimization work for other major games benchmarked by review sites, not for every game. Tell yourself this: why can’t Intel offload work to the CPU in every game and allow the user to check this in the device driver panel? They are only offloading in the most benchmarked games. Also, why can you just change the .exe name and lose the optimization? That doesn’t make sense.

    It’s simple to conclude that Intel is acting unethically when it comes to fair competition and industry protocols. I like Intel but I can readily recognize when they do something wrong.

      • crazybob
      • 10 years ago

      So, if people disagree with you, they aren’t thinking clearly? Way to score points, Mr. Cheney.

        • jdaven
        • 10 years ago

        Sorry…”I believe” that people aren’t thinking clearly.

        There that should be better.

        • tay
        • 10 years ago

        Way to miss the point he was making idiot.

      • Voldenuit
      • 10 years ago

      I personally don’t see this as unethical of intel, nor harmful to consumers (unlike, say, my stance on other GPU makers in recent posts).

      If an application is not saturating the CPU, and available CPU power can be used to improve GPU performance, what’s the harm in doing so? It’s not as if intel is deliberately degrading IQ (as nvidia did in the past replacing whole shader routines, or ATI did with Quack). With most games still not quad(or octa)-thread capable, a mixed hardware/software vertex approach seems like a sensible option to improve application performance.

      Of course, the most sensible option for the consumer is still to buy an ATI or nvidia GPU, whether discrete or integrated, if they’re at all interested in gaming.

        • jdaven
        • 10 years ago

        But why can you rename the .exe and lose the optimization? Why does the optimization work for only a few games? What limitation is there to not just work with all processes that tax the GPU so that the CPU kicks in to help out? Why does it only work for those benchmarks that are commonly used by review websites? Why isn’t there the ability to turn this on/off in the control panel? Why is this optimization in violation of Futuremark rules?

        If this worked similarly to hybrid GPU setups or multiple graphics cards, of course there would be nothing wrong with using the optimization. But it only works with benchmarks that are used to show performance over the competition, and only when the .exe file isn’t renamed. Come on now. It’s so terribly obvious what Intel is doing here. We’ve been through this a million times in the past.

          • Voldenuit
          • 10 years ago

          Per-application optimization is something that the entire industry does. Obviously, intel is trying to make its GPUs look better than they really are. The caveat is that they really *are* performing better than they really are, since the CPU is helping out, so you are getting better performance out of it. And there is no IQ hit.

          True, intel is trying to inflate its performance figures by targeting commonly used benchmarks (let’s not even call them ‘applications’). And while the crysis user (who is playing on an IGP? @_@) may appreciate the extra FPS, I agree that Futuremark’s only ‘use’ is in convincing people to buy a given card, if that.

          As to why the optimizations don’t work for all applications, it could be that using a software renderer may not be faster in all cases, in which case their tweak/cheat/whatever-you-call-it might even make things slower. Or it could become a drain on the CPU in CPU-limited situations (eg, Supreme Commander or another CPU-taxing game).

          At the end of the day though, maybe I’m more sanguine about this issue because intel IGPs are irrelevant to gaming, and are going to stay that way, Larrabee be damned. :p

            • oldDummy
            • 10 years ago

            This should not surprise anyone who has been around the industry for any length of time.

            EDIT: Darn, messed up response; that shouldn’t surprise anyone either.
            😉

            • jdaven
            • 10 years ago

            I stopped reading your post after this line and I thought that’s good enough for me and I rest my case:

            “Obviously, intel is trying to make its GPUs look better than they really are.”

    • derFunkenstein
    • 10 years ago

    yeah i don’t really see a problem here. Well…I do, kinda. It’d be nice if they just tried this out on all workloads and made it a checkbox on the driver’s option screen, so that it’s something you can turn on/off as you like.

    • Philldoe
    • 10 years ago

    I honestly don’t see this as a big deal, Why?

    #1 – It dosn’t specificly target only the 3DMark exe, it also targets real games
    #2 – They sent this to 3DMark for approval. So I’m going to say 3DMark knows about the optimization and is considering it.
    #3 – It dosn’t kill image quality at all.

    That’s good enough for me. Now had this only been done on 3DMark, I’d throw up the ethical flag and boo intel until my face turned purple.

      • shank15217
      • 10 years ago

      Optimizing for a special case usually lowers performance for the general case, therefore per program optimization usually shows poor drivers to begin with.

      • eitje
      • 10 years ago

      q[<#1 - It dosn't specificly target only the 3DMark exe, it also targets real games<]q It DOES specifically target the 3DMark exe. This sentence would be more accurate: *[

    • ssidbroadcast
    • 10 years ago

    Actually, after doing some thinking I’ve decided to withhold judgement. Geoff, if you want to be fair you should probably whup out an hp dv2 and see if you get a similar effect from changing .exe names.

      • ssidbroadcast
      • 10 years ago

      Wow, really? No one else thinks Geoff should look at AMD??

        • anotherengineer
        • 10 years ago

        depends how busy he is with dates lol

          • ssidbroadcast
          • 10 years ago

          ?? Geoff is doing the review, not me. All that guy does is ride bike marathons and swim with goofy looking mp3 players. He doesn’t have time for girls.

        • derFunkenstein
        • 10 years ago

        …

          • ssidbroadcast
          • 10 years ago

          I fail at life.

    • FubbHead
    • 10 years ago

    Oh, big surprise there.

    • asdsa
    • 10 years ago

    Typical shady tactics from Intel. But I liked the term “Vintage” you used. It depicts nicely the whole benchmark using old school DX10 API.

    • Auril4
    • 10 years ago

    Intel has been setting the standards in ethics violations for years in the market place. Why should the technical side be any different?

      • stmok
      • 10 years ago

      It’s a corporation… Corporations don’t have ethics to begin with. It’s all about doing something questionable (as long as it brings in the money), and getting away with it… Well, that is until they get caught and punished!
      => http://ec.europa.eu/competition/sectors/ICT/intel.html
      …It’s the same situation here. Everyone in the enthusiasts/geek community knows Intel’s IGP isn’t much for 3D performance. To try to manipulate a benchmark in order to make it seen in a better light is really pushing the BS meter. If Intel’s Larrabee flops to this level, I’m switching to ATI/AMD solutions.

        • dpaus
        • 10 years ago

        l[<"Corporations don't have ethics to begin with" <]l - don't forget that a corporation, under the eyes of the law, is considered to be a person. So yes, corporations /[

    • Buzzard44
    • 10 years ago

    …

      • FubbHead
      • 10 years ago

      Grammatical error? All I see is a missing word…

        • derFunkenstein
        • 10 years ago

        …which would be a grammar error.

          • FubbHead
          • 10 years ago

          To me, a grammatical error is an error in grammar actually used. This is just a typo.

            • derFunkenstein
            • 10 years ago

            a grammatical typo? 😉

      • Mr Bill
      • 10 years ago

      Well maybe the point is you can read the sentence faster if a few words are missing? LOL.

        • MadManOriginal
        • 10 years ago

        Ebonics – helping you read faster since 1973

        ?

          • Mr Bill
          • 10 years ago

          If reading were a benchmark, see.

            • derFunkenstein
            • 10 years ago

            NCLB ensures that reading is a benchmark, just not what you’re used to…

          • Mr Bill
          • 10 years ago

          Article detects fan type remove words if intel add words if amd.

    • anotherengineer
    • 10 years ago

    el oh el @ intel

    I can see adding optimizations to help out in games, but adding them for a benchmark that has rules against optimization = teh ghey

    At the end of the day I as always put no faith in synthetic benchmark numbers, and let the game fps rates speak for themselves.

    It would have been really funny though if they had a big quad and super optimizations on a G41 that pumped out numbers like a 280 or 4890, that would have gave me lolz

      • d0g_p00p
      • 10 years ago

      You might want to save your lolz then.

      That day will come in about 10 years with whatever is Intel’s current GMA or if Laughabee is still around. Only then will Intel have a graphics solution that can match the speed and quality of a GTX280 or HD4890.

    • bhassel
    • 10 years ago

    …

      • data8504
      • 10 years ago

      I like this.

      • Convert
      • 10 years ago

      This is the first thing I thought of when reading the article.

      I don’t see a compelling argument to say that this is “wrong” simply because it is doing exe detection.

      In all prior cases of this type of thing the detection was used to substitute and cheat, in this case it appears that it simply takes better advantage of the hardware.

      • LaChupacabra
      • 10 years ago

      Agreed, as long as it’s not degrading or changing any of the maths used to render the scene then it’s a performance optimization and should be allowed. Hopefully everyone by now is smart enough to know that synthetic gaming benchmarks should only be used by overclockers, just like stopwatches should be used for drag racers.

      I think the funny thing here is that the performance boost was as dramatic as it was. I can’t imagine the kind of kick-in-the-nuts the engineers of this IGP must feel.

        • bhtooefr
        • 10 years ago

        Stopwatches can be used by racers on other tracks, too. Maybe dynamometers are a more accurate analogy – they provide a figure that people brag about, but means very little in the real world. 😉

    • ltcommander.data
    • 10 years ago

    I think this is more of a case where creative ideas are being limited by Futuremark’s definition of unfair optimization via targeting the executable.

    Previous Intel IGPs like the much maligned GMA 950 couldn’t process the full graphics pipeline in hardware and had to resort to CPU software processing for things like vertex shaders. As such, Intel had to develop a relatively well optimized software rendering pipeline. With the newer GMA X3000 and 4000 series, a full hardware pipeline was implemented, but there were some cases where Intel’s previous software pipeline was better optimized and actually faster than hardware acceleration in games, which is why Intel implemented a driver switch to switch between hardware and software rendering.

    Now it appears Intel is trying to combine both, by doing hardware and software rendering at the same time. This might actually make sense seeing that software seems to be lagging hardware in multicore support so there would be CPU cores free anyways. Technically, by Futuremark’s requirements, Intel is in the wrong, but in this case, Futuremark may want to change their definition to incorporate hybrid hardware/software processing rather than having Intel get rid of it.

      • Anonymous Coward
      • 10 years ago

      Intel needs to target all games with this clever driver, not just a very short list of high-profile benchmarks. If they did that, there would be nothing to complain about.

    • Freon
    • 10 years ago

    “All of which brings us back to the perils of using 3DMark Vantage as a substitute or proxy for testing real games. Those perils are well established by now.”

    Indeed. I’ve never been much a fan of synthetic benchmarks, other than maybe curiosity in some tests on new chipsets. Instead, I think the industry (gamers, chip vendors, etc) should push developers to include benchmarking tools in their games and that should be the basis of actual purchases.

    It’s still a bit of a crapshoot, with the whole “the way it was meant” campaign and how often devs get in bed with the graphics vendors. A few key integrations may make one chip look better when it really only affects those games. Which is why it is even more important to subtract 3DMark and add just one more real game, to increase diversity. At least sometimes a game can be representative for other games using the same engine (id Tech, UT, Source, etc).

      • shank15217
      • 10 years ago

      3D mark isn’t a synthetic benchmark, just like call of Juarez isn’t a synthetic benchmark. Synthetic benchmarks are better known as micro benchmarks. The whole point of 3D mark is not to be a micro-benchmark.

        • MadManOriginal
        • 10 years ago

        Non-gaming benchmark then? By micro benchmark do you mean something that tests low level functions? Because 3DMark does have some of those thrown in to the score.

          • shank15217
          • 10 years ago

          Yes it does, but it’s a suite of benchmarks trying to address the performance of a graphics subsystem from many viewpoints, so it’s unfair to say it’s a synthetic benchmark, especially when the game engine it uses for several tests is also used in a game.

            • Freon
            • 10 years ago

            I’d be far more impressed if they had id tech, Unreal tech, the Oblivion/FO3 engine (forgot the name), or something etc. but they don’t.

            I’d rather see reviews use just one more actual game benchmark in place of 3DMark.

            I’m mildly curious about some of the narrow low-level tests, like single and multi texture rates, but they are just that, curiosities that really don’t affect my purchase decisions. It is the game benchmarks that sway me, and I like to see as many of them as possible.

    • phez
    • 10 years ago

    … what about the image quality?

    • crazybob
    • 10 years ago

    Is there really a story here? Intel is showing that performance on actual workloads is improving, why not provide the same boost to the benchmark? Why should Intel hobble benchmark performance when game performance really does improve?

    Storm in a teacup, mate.

    Here’s what the article should be about: Why doesn’t Intel make a driver that detects that the graphics card is saturated and offloads to CPU, regardless of the name of the executable (rather than update the driver with executable names for the next intensive game). Replace this with a shorter, more elegant, title.

      • ironoutsider
      • 10 years ago

      Yeah, just read the article and totally agree. Intel should be smarter than that! Same reason I don’t expect larrabee to be better than most integrated chip sets. Not much of a story here.

      • Nitrodist
      • 10 years ago

      Err, so by gaming the 3dmark system, you’re saying that they’re doing what they should be doing? Great logic.

        • crazybob
        • 10 years ago

        How is it gaming the benchmark if actual games (Crysis shown in article) are run with the same (admittedly silly method of) optimization?

          • AlvinTheNerd
          • 10 years ago

          Because the 3DMarkVantage score is the same as a 785G system but the Crysis framerate is half even with optimizations.

          Thus Intel is optimizing for the benchmark not only in software but in hardware. In other words, working to make the benchmark as high as possible without actually putting in the transistors that make games work well.

          Its cheating the benchmark and using their reputation as a cheap way to boost their card.

          • zima
          • 10 years ago

          Notice also how Intel optimises in this way for only a few games… some of which are often used as benchmarks, all with pretty high requirements – even with optimisations they’re pretty bad on Intel GFX (but the score is noticeably higher…)

          It would be worth something, and could actually be applauded, if those optimisations were applied to a vast number of older or simply graphically less intensive games, making them run not “quite decent” but “rather good”.

        • MadManOriginal
        • 10 years ago

        Well it’s ‘cheating’ according to Futuremark’s rules for Futuremark benchmarks although this driver is not approved by them yet so getting *too* worked up by it is silly. But when it comes to real games what counts as ‘cheating’? Does increasing performance without negatively affecting image quality count as cheating? That’s really what most all driver optimizations do so if that’s the standard we’re going by those drivers from NV and ATi are cheating in real games too. Targeting Vantage is the only bad thing here and that’s because it’s against Futuremark’s rules, they probably will not approve the drivers for that very reason.

    • ssidbroadcast
    • 10 years ago

    Yeh okay, that /[

    • Krogoth
    • 10 years ago

    QUACK!

      • Ashbringer
      • 10 years ago

      I mentioned Quake Quack from years ago, and it got deleted. Very strange.

        • Meadows
        • 10 years ago

        It didn’t, you posted that on the podcast entry.
