Valve’s Source engine goes multi-core

IF THE LAUNCH OF Intel’s new quad-core Core 2 Extreme QX6700 processor has made one thing clear, it’s that some applications are multithreaded, and others are not. Those that are can look forward to a healthy performance boost jumping to four cores, including near-linear scaling in some cases. Those that are not enjoy no such performance benefits, and may even run slower than on the fastest dual-core chips due to the slightly slower clock speeds of Intel’s first quad-core offering. Unfortunately, most of today’s game engines are among those applications that aren’t effectively multithreaded. A handful can take advantage of additional processor cores, but not in a manner that improves performance substantially.

With the megahertz era effectively over and processor makers adding cores rather than cranking up clock speeds, game developers looking to exploit the capabilities of current hardware are faced with a daunting challenge—“one of the most important issues to be solving as a game developer right now,” according to Valve Software’s Gabe Newell. Valve has invested significant resources into optimizing its Source engine for multi-core systems, and doing so has opened up a whole new world of possibilities for its game designers.

You won’t have to wait for Half-Life 3 to enjoy the benefits of Valve’s multi-core efforts, though. Multi-core optimizations for Source will be included in the next engine update, which is due to become available via Steam before Half-Life 2: Episode 2 is released. Read on to see how Valve has implemented multithreading in its Source engine and developer tools, and how they perform on the latest dual- and quad-core processors from AMD and Intel.

Multiple approaches to multi-core
Unlike some types of applications, games strive for 100% CPU utilization to give players the best experience their hardware can provide. That’s easy enough with a single processor core, but more challenging when the number of cores is multiplied by two, and especially by four. Multithreading is needed to take advantage of extra processor cores, and Valve explored several approaches before settling on a strategy for the Source engine.

Perhaps the most obvious way to take advantage of multiple cores is to distribute in-game systems, such as physics, artificial intelligence, sound, and rendering, among the available processors. This coarse threading approach plays well with existing game code, which is generally single-threaded, because it essentially amounts to running several largely single-threaded systems side by side.

Game code tends to be single-threaded because games are inherently serial applications—each in-game system depends on the output of other systems. Those dependencies create problems for coarse threading, though, because games tend to become bound by the slowest system. It may be possible to spread multiple systems across a number of processor cores, but performance often doesn’t scale in a linear fashion.
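To make the idea concrete, here is a minimal sketch of coarse threading, assuming a hypothetical pair of client and server subsystem functions (the names are placeholders, not Valve’s code). Each system gets its own thread, and the frame takes as long as the slowest of them:

```cpp
// Minimal, hypothetical sketch of coarse threading: one thread per major
// in-game system. The subsystem functions are placeholders, not Valve's code.
#include <thread>

void RunClientSystems() { /* UI, graphics simulation, rendering */ }
void RunServerSystems() { /* AI, physics, game logic */ }

int main()
{
    // Each system runs on its own thread; the frame finishes only when the
    // slowest system does, which is why scaling is rarely linear.
    std::thread client(RunClientSystems);
    std::thread server(RunServerSystems);
    client.join();
    server.join();
}
```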

Valve initially experimented with coarse threading by splitting the Source engine’s client and server systems between a pair of processor cores. Client-side systems included the user interface, graphics simulation, and rendering, while server systems handled AI, physics, and game logic. Unfortunately, this approach didn’t yield anywhere close to a linear increase in performance. Valve found that its games spend 80% of their time rendering and only 20% simulating, resulting in an imbalance in the CPU utilization of each core. With standard single-player maps, coarse threading was only able to improve performance by about 20%. Doubling performance was possible, but only by using contrived maps designed to inflate physics and AI loads artificially.

In addition to failing to scale well, coarse threading also introduced an element of latency. Valve had to enable the networking component of the engine to keep the client and server systems synchronized, even in the single-player game. Looking forward, Valve also realized that coarse threading runs into problems when the number of cores exceeds the number of in-game systems. There are more than enough in-game systems to go around for today’s dual- and quad-core processors, of course, but with Intel’s 80-core “terascale” research processor hinting at things to come, coarse threading appears to have little long-term potential.

As an alternative to—and indeed the opposite of—coarse threading, Valve turned its attention to fine-grained threading. This approach breaks down problems into small, identical tasks that can be spread over multiple cores, making it considerably more complex than coarse threading. Operations executed in parallel must be completely orthogonal, and scaling gets tricky if the computational cost of each operation is variable.
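A minimal sketch of fine-grained threading, by contrast, might split one large batch of small, identical, independent operations evenly across however many cores are available; the per-element work below is a placeholder, not anything from the Source engine:

```cpp
// Hypothetical sketch of fine-grained threading: a large batch of identical,
// independent operations split evenly across the available cores.
#include <cmath>
#include <functional>
#include <thread>
#include <vector>

void ProcessChunk(std::vector<float>& data, size_t begin, size_t end)
{
    for (size_t i = begin; i < end; ++i)
        data[i] = std::sqrt(data[i]);   // stand-in for the real per-element work
}

int main()
{
    std::vector<float> data(1000000, 2.0f);
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;

    std::vector<std::thread> workers;
    size_t chunk = data.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        size_t begin = c * chunk;
        size_t end   = (c == cores - 1) ? data.size() : begin + chunk;
        workers.emplace_back(ProcessChunk, std::ref(data), begin, end);
    }
    for (auto& w : workers) w.join();
}
```

Scaling this way is straightforward only because every element costs the same; as noted above, variable per-operation costs make the split much trickier.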

Interestingly, Valve has already implemented fine-grained threading in a couple of its in-house development tools. Valve uses proprietary VVIS and VRAD applications to distribute the calculation of visibility and lighting for game levels across all the systems in its Bellevue headquarters. These apps have long taken advantage of distributed computing, much like Folding@Home, but are also well suited to fine-grained threading. Valve has seen close to linear scaling adapting the apps to take advantage of multiple cores, and has even delayed upgrading systems in its offices until it can order quad-core CPUs.


Valve’s, er, valve

The Prius method of game programming
Fine-grained threading may work well when it comes to visibility and lighting calculations for game levels, but Valve decided that it wasn’t the right approach for multithreading in the Source engine, in part because fine-grained threading tends to be bound by available memory bandwidth. Instead, Valve chose to implement something it calls hybrid threading, which takes an “appropriate tool for the job” approach. With hybrid threading, Valve created a framework that allows multiple threading models depending on what’s appropriate for the task at hand. In-game systems can be sent to individual cores with coarse threading, and calculations that lend themselves to parallel processing can be spread over multiple cores using fine-grained threading. Work can even be queued for processing by idle cores if the results aren’t needed right away.

Of course, Valve didn’t want its game programmers to have to become threading experts just to take advantage of hybrid threading. Game programmers should be solving game problems rather than threading problems, so a work management system was designed to address gaming problems in a way that’s intuitive for game programmers. This system supports all the elements of hybrid threading and focuses on keeping multiple cores as busy as possible.

Valve’s work management system features a main thread that uses a pool of N-1 worker threads where N is the number of processor cores available. Of course, multiple threads create problems for data sharing if parallel threads want to read and write the same data. Locks are traditionally used to prevent corruption when a thread tries to read data that’s currently being written or modified. However, locks force the read thread to wait, leading to idle CPU cycles that clash with Valve’s desire to keep all cores occupied at all times.
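The general shape of such a system might look like the sketch below, which spins up N-1 workers around a shared job queue. It is only an illustration of the idea, not Valve’s framework; in particular, it protects the queue with a conventional lock, which is exactly the kind of waiting Valve works to avoid, as the next paragraph explains.

```cpp
// Hypothetical sketch of a main thread plus N-1 workers, where N is the
// number of processor cores. Not Valve's framework; the job queue here is
// guarded by an ordinary mutex for simplicity.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkerPool {
public:
    WorkerPool()
    {
        unsigned n = std::thread::hardware_concurrency();
        unsigned workers = (n > 1) ? n - 1 : 1;   // leave one core for the main thread
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { Run(); });
    }

    ~WorkerPool()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();   // workers drain the queue, then exit
    }

    void Submit(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void Run()
    {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // run the work item outside the lock
        }
    }

    std::vector<std::thread> threads_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};

int main()
{
    WorkerPool pool;
    for (int i = 0; i < 8; ++i)
        pool.Submit([] { /* some per-frame work item */ });
}   // destructor joins the workers after the queue empties
```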

In an attempt to avoid core idling due to thread locking, Valve made extensive use of “lock-free” algorithms. These algorithms allow threads to progress regardless of the state of other threads, and have been put under the hood of all of Valve’s developer tools.
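A tiny illustration of the lock-free principle, assuming a hypothetical batch of work items: threads claim indices with an atomic fetch-and-add rather than by taking a lock, so no thread ever stalls waiting on another.

```cpp
// Minimal lock-free illustration (not Valve's code): worker threads claim
// items with an atomic counter instead of a lock, so every thread can make
// progress regardless of what the others are doing.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const int kItems = 1000;
    std::vector<int> results(kItems, 0);
    std::atomic<int> next_item(0);

    auto worker = [&] {
        for (;;) {
            // fetch_add hands out a unique index without blocking anyone.
            int i = next_item.fetch_add(1, std::memory_order_relaxed);
            if (i >= kItems) break;
            results[i] = i * i;   // stand-in for real work on item i
        }
    };

    unsigned n = std::thread::hardware_concurrency();
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < (n ? n : 2); ++t)
        threads.emplace_back(worker);
    for (auto& t : threads) t.join();

    std::printf("last result: %d\n", results[kItems - 1]);
}
```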

To illustrate the application of its new programming framework, Valve explained how it handles multithreaded access to the spatial partition, a data structure that represents every object in the world. The spatial partition is consulted any time something dynamic happens in the world, from movement to shooting. Obviously, you want to allow multiple threads to access the partition, but that becomes tricky if several threads try to write to it at the same time. Through profiling, Valve discovered that 95% of the threads accessing the spatial partition were just reading, while only 5% were writing. Valve now allows multiple threads to read the partition simultaneously, but only one thread at a time may write to it.
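A sketch of that read-mostly policy using a standard reader-writer lock is shown below. The SpatialPartition type and its methods are hypothetical stand-ins; Valve’s actual structure and synchronization primitives are its own.

```cpp
// Hypothetical sketch of a many-readers / single-writer policy using a
// reader-writer lock. Not Valve's spatial partition, just the access pattern.
#include <mutex>
#include <shared_mutex>
#include <vector>

struct Object { float x, y, z; };

class SpatialPartition {
public:
    // Many threads may query at once (the common case, roughly 95% of accesses).
    std::vector<Object> QueryRegion(float minX, float maxX) const
    {
        std::shared_lock<std::shared_mutex> lock(mutex_);
        std::vector<Object> hits;
        for (const Object& o : objects_)
            if (o.x >= minX && o.x <= maxX)
                hits.push_back(o);
        return hits;
    }

    // Only one thread at a time may modify the partition (the rare case).
    void Insert(const Object& o)
    {
        std::unique_lock<std::shared_mutex> lock(mutex_);
        objects_.push_back(o);
    }

private:
    mutable std::shared_mutex mutex_;
    std::vector<Object> objects_;
};

int main()
{
    SpatialPartition world;
    world.Insert({ 1.0f, 0.0f, 0.0f });
    std::vector<Object> hits = world.QueryRegion(0.0f, 2.0f);
    return hits.empty() ? 1 : 0;
}
```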

Valve was also able to apply multithreading to the Source engine’s renderer. Game engines must perform numerous tasks before even issuing draw calls, including building world and object lists, performing graphical simulations, updating animations, and computing shadows. These tasks are all CPU-bound, and must be calculated for every “view”, be it the player camera, surface reflections, or in-game security camera monitors. With hybrid threading, Valve is able to construct world and object lists for multiple views in parallel. Graphics simulations can be overlapped, and things like shadows and bone transformations for all characters in all views can be processed across multiple cores. Multiple draw threads can even be executed in parallel, and Valve has rewritten the graphics library that sits between its engine and the DirectX API to take advantage of multiple cores.
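A simplified illustration of the per-view idea, using hypothetical view names and a placeholder list-building function, might launch one asynchronous task per view and gather the results before any draw calls are issued:

```cpp
// Hypothetical sketch of preparing several render views in parallel with
// std::async. The views and the build step are placeholders, not the Source
// renderer's actual interfaces.
#include <future>
#include <string>
#include <vector>

struct ViewLists { std::string view; int visibleObjects; };

ViewLists BuildListsForView(const std::string& view)
{
    // Stand-in for building world/object lists, updating animation,
    // and computing shadows for this particular view.
    return { view, 42 };
}

int main()
{
    std::vector<std::string> views = { "player", "water_reflection", "monitor_1" };

    std::vector<std::future<ViewLists>> jobs;
    for (const std::string& v : views)
        jobs.push_back(std::async(std::launch::async, BuildListsForView, v));

    for (auto& job : jobs) {
        ViewLists lists = job.get();   // consumed before any draw calls go out
        (void)lists;
    }
}
```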

Performance improvements and new possibilities
Valve says hybrid threading is the most difficult approach to multithreading, but it scales well enough to be worth the investment. With dual-core processors, Valve sees an increase in frame rate as the main benefit to multithreading. However, there comes a point where increasing the frame rate begins to deliver diminishing returns. With quad-core systems, Valve is looking to provide gamers with new experiences rather than simply smoothing frame rates. Game elements like artificial intelligence, particle systems, and physics have traditionally been given fractions of a single CPU’s resources. Quad-core processors allow them to access considerably greater computational resources that programmers are more than eager to burn on smarter AI, richer visual simulations, and more realistic physics.

Artificial intelligence is a great candidate for hybrid threading because it’s tolerant in the sense that answers to questions aren’t necessarily needed right away. The game can wait a few fractions of a second for the answer to a “where’s cover?” question without adversely affecting gameplay, allowing some calculations to be queued to run on idle processor cores. There are also implications for what Valve calls out-of-band AI. This additional layer of artificial intelligence is separate from the core AI, but feeds information to it.
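As a rough sketch of such a deferred query, assuming a hypothetical FindNearestCover routine, the question can be kicked off asynchronously while the NPC carries on with its current behavior until the answer arrives a few frames later:

```cpp
// Hypothetical sketch of a deferred AI query: the expensive question runs on
// a spare core while the game keeps simulating frames, and the answer is
// consumed whenever it becomes ready. All names are placeholders.
#include <chrono>
#include <future>
#include <thread>

struct Vec3 { float x, y, z; };

Vec3 FindNearestCover(Vec3 from)
{
    // Stand-in for an expensive search over navigation data.
    std::this_thread::sleep_for(std::chrono::milliseconds(30));
    return { from.x + 5.0f, from.y, from.z };
}

int main()
{
    Vec3 npcPos{ 0.0f, 0.0f, 0.0f };

    // Queue the question; gameplay is tolerant of a short delay.
    std::future<Vec3> cover = std::async(std::launch::async, FindNearestCover, npcPos);

    // Keep running frames until the answer is ready.
    while (cover.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
        // ... run this frame's normal AI and movement ...
        std::this_thread::sleep_for(std::chrono::milliseconds(16));   // ~60 fps frame
    }

    Vec3 target = cover.get();   // use the cover position once available
    (void)target;
}
```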


Valve rains down particles


Environmental particle interaction

Particle systems also lend themselves well to hybrid threading. Although they’re mostly a visual effect, particle systems actually tend not to be GPU-bound. They also tend not to interact with each other, making it possible to run independent particle systems on individual cores. In situations where there is only one particle system in the scene, that system can also be distributed across multiple cores. Having those extra cores available for particle processing allows Valve to create much more complex particle systems—ones that can interact with the world and even have gameplay implications.
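Because independent systems don’t interact, a sketch of the approach can be as simple as stepping each one on its own thread; the ParticleSystem type below is a placeholder, not Source’s particle code.

```cpp
// Hypothetical sketch of updating independent particle systems in parallel:
// since the systems don't interact, each can be stepped on its own core.
#include <thread>
#include <vector>

struct Particle { float x, y, z, life; };

struct ParticleSystem {
    std::vector<Particle> particles;

    void Step(float dt)
    {
        for (Particle& p : particles) {
            p.y    -= 9.8f * dt;   // stand-in for the real simulation
            p.life -= dt;
        }
    }
};

int main()
{
    std::vector<ParticleSystem> systems(4, ParticleSystem{ std::vector<Particle>(10000) });

    std::vector<std::thread> workers;
    for (ParticleSystem& s : systems)
        workers.emplace_back([&s] { s.Step(1.0f / 60.0f); });   // one system per thread
    for (auto& w : workers) w.join();
}
```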

Quantifying multi-core performance
To illustrate how multi-core processors can improve performance, Valve gave us a couple of benchmark applications. The first runs the VRAD lighting calculation tool on a Half-Life 2 map. This isn’t an end user application, but it shows how well multithreading can speed elements of the game development process, in this case a level build.

VRAD exhibits near-linear scaling, with the quad-core Core 2 Extreme QX6700 building the level nearly twice as quickly as the X6800, which runs at a higher clock speed. The Athlon 64 FX-62 is more than 35% slower than Intel’s fastest dual-core processor and not even close to the QX6700.

Valve also gave us a particle system benchmark that actually runs inside the Source engine. This test steps through a series of particle simulations, and according to Valve, it’s completely CPU-bound. Unlike VRAD, this test case is more typical of what an actual gamer might experience.

The QX6700 cleans up in the particle system benchmark, running nearly twice as fast as the dual-core X6800. Again, we see the FX-62 bring up the rear, although this time it’s only about 20% off the pace set by the X6800.

Conclusions
Valve makes a good case for its hybrid threading model; then again, it’s hard to argue against using the most appropriate threading approach for each task. Creating a programming framework that allows that kind of flexibility was apparently very difficult, but in the end, Valve says it will enable games that competitors who don’t make the same investment in multithreading simply won’t be able to match. Hybrid threading has also proven to be an asset in the company’s work on Microsoft’s multi-core Xbox 360 console, and Valve says it sets the company up nicely for what it believes is a “post-GPU” era on the horizon. Interestingly, though, Valve noted that its model isn’t particularly applicable to the PlayStation 3’s Cell processor.

Valve intends to roll out hybrid threading enhancements in the next major Source engine update, which will be released before Half-Life 2: Episode Two ships. Those enhancements won’t include the richer visual simulations, smarter AI, or more complex physics that are possible with multi-core processors, but dual- and quad-core systems should see a performance boost with Valve’s existing Source-engine games.

Of course, the more intriguing potential of Valve’s approach to multi-core gaming won’t be realized until its game designers start developing titles explicitly with multiple cores in mind. Work has already begun on more complex particle systems, realistic physics, and smarter AI, and Valve may even release a short level—similar to Lost Coast—to showcase how the Source engine can exploit quad-core processors. That release may be the first glimpse we get of how multi-core processors can fundamentally change gaming. For years, we’ve enjoyed how the rapid pace of graphics hardware development has enabled ever more compelling visuals. Yet while developers have been able to create games that look real, their behavior has been anything but. Multi-core processors may finally give artificial intelligence, physics, and other game elements a chance to catch up.

Comments closed
    • markbeeler
    • 12 years ago

    Where do i get this engine for testing my own system?

    by this i mean the vrad map-maker and the graphics tester

    URL please

    thanks!

    • MadManOriginal
    • 13 years ago

    Umm…aren’t games sequential because of the nature of what a game is? What happens next depends upon what is happening now, unless you guys mean something else by sequential.

    • somegeek
    • 13 years ago

    They’re doing it because of the Xbox 360 and PS3 versions:

    http://xbox360.ign.com/articles/718/718515p1.html

    • Vaughn
    • 13 years ago

    Boots can u confirm that CS S randomizes shots. Cause you’re the first person to ever mention that. And from your reply’s i’m gonna assume u don’t play CS source, correct me if i’m wrong. Do u have any links or proof to this statement?

      • BooTs
      • 13 years ago

      Have never bothered to play CS:S, but I did play a lot of CS. In CS and many new games with realistic weapons, your crosshairs expand once you start firing to indicate the area in which your bullet go is expanding. How does it determine where in this expanded box it goes? It randomizes. There isn’t a physics model or timed algorithm that would be useful to determine that so it is just randomized.

      The accuracy of your weapon in CS would increase when you crouched, and decrease when you were running.

      Is that clear enough? Basically in CS any shot that isn’t fired when the crosshair is extremely narrow has a long funnel of possible paths it will take. One is randomly selected whenever you shoot.

        • sigher
        • 13 years ago

        Don’t forget to mention the famous buggyness of valve’s randomization algorithms, which at one point they admitted to in the old CS and said they were ‘fixed’.

      • fatpipes
      • 13 years ago

      “maximum expected improvement to an overall system when only part of the system is improved”

      Add more cores, and sure you’ll see less return, that shows up in almost every major benchmark. That’s why you have to change the software too. Multi-core doesn’t suck at all, SMP alone is quite creamy. Amdahl’s law states that if I add cores to the CPU, my software performance will see diminishing returns. But if I add cores to the CPU AND tune my software to take advantage of it, there’s no reason it can’t see near linear performance gains.

      Amdahl’s Law never stopped any multi-million dollar IT budgets from buying more licenses on N-core servers, even though there are frequently 70% to 80% improvements on performance that decrease with the number of processors. Nothing sucks about that enough to discourage people from paying the cash.

      If you could parallelize every single aspect of a program, including all of the subsystems that get the program up and running and hooked into the hardware and OS, you would be accomplishing a phenomenal task and it would likely require a whole new kind of programming language. Nobody is really expecting that. They just want the best they can get. And bottlenecks will always surface that makes that kind of thing very difficult.

      There are programs that know how to make the best use of every CPU in your system, they spin off the worker threads and the master thread goes to sleep and it’s damn near 95% of every single CPU. That requires a ridiculous amount of optimization and the ideal scenario of having a highly parallelizeable algorithm.

      Video games are something that aren’t purely scientific and there are many many subsystems that really can’t or shouldn’t be parallelized, but there’s opportunity for parallelization and even if you see 50% improvement in your game from a dual-core, that’s 40-45% more improvement you would see over a simple frequency bump.

      I find it humorous that people tend to think most technologies exist for the end-user. We’re big dumb consumer cash cows to technology developers for the most part.

      64-bit? Not for us, probably won’t be for a long time. It was spun that way because it was a necessary development and execs thought they could cash in on the dimmer consumers. Dual-core? Not for us. Honestly, it’s just technology companies keeping up with the times, it’s technology companies getting around the limitations of their technologies. It’s spun to us as a big improvement, even though it’s realy not for most people. If you can appreciate what these technologies do for you, and you know that you can benefit from them… the technology is for you. Otherwise, don’t complain that it won’t do anything for you… not everying is about you.

        • ew
        • 13 years ago

        No, that is not what it says. If an algorithm can’t be 100% parallelized then you will always see diminishing returns. You can change the software all you want. There is still a ceiling.

          • fatpipes
          • 13 years ago

          Sorry, I edited the post quite significantly.

          I don’t see how this makes multi-core “suck.” There’s nothing that sucks about performance jumps orders of magnitude higher than frequency or cache bumps. There’s nothing that sucks about the software industry adapting to take advatange of this. There’s nothing that sucks about the industry jumping over its own hurtles to improve nevertheless, regardless of diminishing returns that are nowhere near as bad as what we were seeing with frequency bumps.

          • WaltC
          • 13 years ago

          I’m not sure I understand your problem here…;) Diminishing returns is a phenomenon relative to everything I can think of–uh, particularly, the speed of light, which helps to illustrate that physical laws have a way of creating diminishing returns. Just because we put headlights on a car and the light from those headlights travels at the same speed whether the car is sitting still or moving forward at 100mph, that’s no reason to complain that the speed of light is handicapped by a diminishing return, is it?

          Likewise, if an automobile has 200 horsepower but doesn’t accelerate precisely 2x as fast as an automobile with 100 horsepower, or a 4-cyclinder engine doesn’t get exactly 2x the gas mileage of an 8-cylinder, those discrepancies of less than 100% improvement are not grounds for rejecting 200 hp, 8-cylinder engines, are they?

          I mean, I’d be amazed and pleased to see 95% performance improvement between single and dual core, or dual and quad-core. The problem, of course, is that the cpu is but one component of the system, and depending on the operations involved the system throughput will inevitably become bottlenecked and limited by the speed of its slowest components, and in the case of dual/quad-core cpus …

        • supercromp
        • 13 years ago

        FUCK creamy. please, no more cream with dual core talk. its nice, its great. theres no CREAM.

      • TheMatrixOne
      • 13 years ago

      I really don’t think that multi cores suck. Two are sweet for multitasking and other multi threaded software. Although I do think that they should not ramp up the cores for us gamers too much too soon. This is because It puts a lot of strain on, the developers from what I hear from them, and that leads to longer developing times and/ or higher prices for us.

      Another two reasons for not wanting them to get too high too fast is that processors will become worse than graphics cards (meaning dx) in terms of how soon gamers can take advantage of the greater power potential in games. Sure Valve seems to be doing it well and fa st but it really wont help us gamers till a game on the source engine uses the added capabilities to make the game better. The last is that heat and power usage seems to go up dramatically with doubling the cores in the first stages of their release.

      I’m defiantly getting a core2 this coming year (along with many other upgrades). I can’t wait!

      • sativa
      • 13 years ago

      so is that that why supercomputers with 1000 processors are barely faster than ones with 500 processors.

      [/sarcasm]

      • sativa
      • 13 years ago

      It isn’t necessarily about speeding up what already exists, but giving developers more freedom to create things that weren’t possible with single cores.

        • ew
        • 13 years ago

        The point is that no matter how much you change things tasks that can only be accomplished sequentially just don’t do well on multicore systems. Games are highly sequential. The kinds of applications run on super computers aren’t.

          • sativa
          • 13 years ago

          everyone has been taught that games are all sequential. this is because they have been forced to be sequential due to the past 30 years of computer hardware architecture.

          however, it doesn’t have to be that way.

          that is what these programming tools are all about: taking advantage of the quantity of transistors vs the speed at which the transistors switch

            • ew
            • 13 years ago

            See the example in my first post. It is strait from the article. Certain parts just have to be done sequentially.

            • murfn
            • 13 years ago

            Amdahl’s law states that there is a limit to the benefit you can get from spreading an algorithm over an ever increasing number of cores. The underlying assumption is that the algorithm is fixed. At the risk of repeating other POV’s, the algorithm is not fixed in game development as each game can be designed to use the extra cores to the fullest extent possible.

            This idea can be seen in graphics, where a more powerful GPU will provide diminishing returns on existing games. Eventually, the GPU will be so powerful that each frame will be CPU bottlenecked. However, each new game can be designed to render a more complex world, with longer shaders and more real effects.

            • sativa
            • 13 years ago

            yes but you can INCREASE the number of ‘parts’ with more cores. thats the whole point.

            • sigher
            • 13 years ago

            Games aren’t sequential at all, as this very article points out, games are about graphics and sound and AI, and graphics for starters are highly parallel, you see hundreds of objects and millions of pixels in a frame, so these objects have to all be simultaneously placed and been calculated for every frame, that just cries out for parallelism.

          • willyolio
          • 13 years ago

          games were made sequential because the hardware was for the past few decades, not the other way around.

            • sativa
            • 13 years ago

            is there an echo 🙂

      • d2brothe
      • 13 years ago

      Yes, I totally agree, what is the point of a 3.478 improvement, thats totally a waste of effort…</sarcasm> thats a rather stupid thing to say, everything has diminishing returns, thats why the megahertz war no longer matters, because improving CPU manufacturing process was having diminishing returns on processor speeds. Just because it has diminishing returns doesn’t mean it isn’t worth working towards it. Also, Amdahl’s law only refers to workloads that aren’t 100% parallelizable, work has only begun on parallelizing games, nobody knows how close we can get…I’d say 95% is a good start.

      • sigher
      • 13 years ago

      if 95% of a process can be parallelized then only 5% would be exempt from speedup, and that is a small amount, you start with separating the task you can parallelize, then you are left with a 5% (of original) basic taskspeed task and a rest that is a 100% speedable part, then you would no longer have the law apply to that part of the program.
      So yes the speedup would diminish and max out to close to 5% of the original if you had infinite cores, but for now we don’t have infinite cores, and using percentages is deceiving in this context.
      Even if it would only be useful until we had 20 cores, that’s a long way to go and I think we’ll have other game engines than source by that time and it’ll sure run a lot faster than 1 core

      • sigher
      • 13 years ago

      Oh and here’s another thing to consider, let’s say you play a game like CS and you meet an enemy and want to shoot him, but your single core CPU says ‘one sec, doing something else’ then you are dead and have to wait 10 minutes until the next round, that’s 10 times 60 seconds = 600 seconds at let’s say 60FPS is a 36000% reduction in gameusespeed caused by one less core ;P

    • Sniper
    • 13 years ago

    Wasn’t Valve bitching about how multithreading was “too hard” not too long ago?

    • sativa
    • 13 years ago

    they make just as much money, if not more, in licensing their game engine than actually selling copies of games.

    why do you think they keep harping on about how the developers will get ‘free’ multi-core enhancements without having to program for it!

    but yeah… that is the best way to do it. so more power to them 🙂

    • SGT Lindy
    • 13 years ago

    Honesty why at this point in time?

    Break it down in sales. How many copies will be sold to people with single core CPU’s vs multicore cpu’s? I bet if the game came our right now it would be something like 70/30 single vs dual.

    Also even if you make the changes will you be able to notice it while playing a game? If I had a fast single core CPU and a good or great GPU…would I notice a difference while playing the game?

    I guess if they make it for the 360 it will work….since they all have multi-core CPU’s:)

    • lemonhead
    • 13 years ago

    TR’s a little late to the party. Anandtech had their article up on 11/7

      • BooTs
      • 13 years ago

      Anandtech is covered in goofy adds and ugly design.

        • indeego
        • 13 years ago

        Yeah it’s a shame, because Anand [and Tom and Kyle] are certainly technically capable, but the articles from an aesthetic standpoint are difficult to peruse, and fairly dull to read. TR [and ars] at least provide some humor, Girlfriend shots, and depth that is interesting to the senses.

        I cannot read most of Anand’s reviews, they are snorefests, but I do like their benches and print view. I recall being banned a while ago because they decided to drop support for yahoo mail accounts in their comments, and ever since then I visit far less.

          • BooTs
          • 13 years ago

          I miss Kyle Bennet. [H] is a spazzfest now. Ars is also one of my top picks. Clean layout, and well written articles.

            • indeego
            • 13 years ago

            Oh did he leave? I haven’t been to [H] in years, pretty much because of Kyle.

            • BooTs
            • 13 years ago

            I don’t think he left, but it used to be just him. Then that joke Steve started posting 100 times a day and I stopped going there.

      • Dposcorp
      • 13 years ago

      (Bart Simpson) If you love Anandtech so much why dont you marry them?

      • Sanctusx2
      • 13 years ago

      That’s why this sounds so familiar. Haha thanks, it was bugging me. I kept thinking I was experiencing deja vu or something; I could swear I’d read all this info awhile ago. Glad I wasn’t crazy, this time. 😛

      • TheMatrixOne
      • 13 years ago

      This is first and for most a hardware site I believe, so six days isn’t really all that bad for something such as this because it is something tat is just in the works and not out now anyway. This site is my fav for a bunch of reasons but mainly their cred and the way the reviews are made.

      By the way Lemon head is one cool member name. It takes me back to the old days. 🙂

      • Pizall
      • 13 years ago

      I love how they have the latest news up on the homepage. I just hit the homepage and read whatever is new. Once in a while when I get the urge to buy something I start looking at tons of benchmarks which are really nice but sometimes I will go to some of the other sites if I cannot find them all on TR.

    • Krogoth
    • 13 years ago

    Cool idea, but ulimately pointless to do with the current revision of Source engine. Any half-decent CPU/GPU can handle source-engine based games fine.

    IMO, Gabe and the other Valve programmers should focus their multi-threading efforts on their next generation engine al HL3.

      • murfn
      • 13 years ago

      They are trying to sell the Source engine. And they are trying to sell more HL2 and other current source games. That is the point.

      We get progress.

      • d2brothe
      • 13 years ago

      Not quite true, any half decent GPU/CPU combination can handle HL2 and other source games (depending on how you define half decent, mind you :P). The source engine, has greater capabilities than are made use of. Also, the source engine does a lot of performance scaling. For example, full physics are not turned on most of the time in HL2, with this enhancement, maybe they could be on a quad core machine. While most games can run fine, it is possible to write reasonable games that won’t run that well on average systems.

      • mikehodges2
      • 13 years ago

      “Valve is looking to provide gamers with new experiences rather than simply smoothing frame rates.”

      Apparently they’re one step ahead of you..

    • ratborg
    • 13 years ago

    All the cores in the world wont help the AI if there isn’t effort put into programming it. Games today on single-core CPUs have enough overhead that the AI should be better than “bad guy charge the player”. Unfortunately that’s all we get with most games. The reality is that good game AI code is hard and tweaks to underlying engine code don’t make it any easier to write.

      • Kharnellius
      • 13 years ago

      While I agree to some extent I think you are missing part of the point. Can someone make a killer AI? Yeah probably, but in order to make it very lifelike you need to dedicate much more processing power than what is currently alloted.

      Basically, Valve is saying this will allow more complex AI without huge performance hits and allow coders to make AI more realistic without worries of “Will this bring every customers computer to a screaching halt?”.

    • Dposcorp
    • 13 years ago

    I am wondering some things.

    Will we start getting benchmarks with 1,2, and 4 core CPUs?

    At this point in time, there are a lot of single and dual core CPUs, and 4 core is starting to pop up, so for the next year or so, it may be nice to seem some comparisons for certain configs.

    For instance, a dual core 3800+ X2 at 1.8Ghz versus a 3Ghz P4 w/HT versus a Quad Core Kenstsfield at 2Ghz.

    Just for those of us that may not be upgrading anytime soon.

      • Beomagi
      • 13 years ago

      you have 3dmark 0x.

      what more can you want for benchmarks other than real apps though? real games, real multimedia apps, real compression etc?

      • TheMatrixOne
      • 13 years ago

      I’m glad you brought that up. Just goes to show how developers make their problems seem like they’re much lager than they actually are (sometimes). I’m happy to see that that he realized that it was going to have to be done anyway so he might as well start work on it early.

    • Jive
    • 13 years ago

    This is probably why Episode 2 is delayed. Delayed for a good reason too.

      • d2brothe
      • 13 years ago

      Agreed….in yo face all u who were ragging on Valve before :P….

      Also, as an owner of a single core CPU, I am wondering what this will do to performance on that type of system. They mentioned the N-1 pool,so it sounds like it might not even run, but what about hyperthreading?

    • MadManOriginal
    • 13 years ago

    I’m especially curious to know how this will affect graphics cards and (blah) dedicated physics cards as physics co-processors. It was actually pretty neat to think that an old graphics card could be held on to and serve some purpose other than selling for half what you paid to fund an upgrade. I haven’t seen any mention by Vavle on this, I see the advantage of focuing on CPU multicore for physics but it would be nice if they didn’t just abandon the alternative route of graphics cards and dedicated cards.

    Geoff or Scott use your connections and hook us up with the info 🙂

      • murfn
      • 13 years ago

      I read a similar article issued by Valve. I cannot remember where I found the URL. According to this article, their decision to go with CPU physics is based on anticipated low adoption rates for PPU’s and physics on GPU’s.
      They do, however, foresee, wide adoption of multi-core CPU’s.

      The information from Valve at this point is ,IMO, a marketing drive and not a technology brief. They are telling potential developers that the Source engine is the leading game engine in the industry as far as multi-core support is concerned.

    • R2P2
    • 13 years ago

    Episode 2 …

      • Nullvoid
      • 13 years ago

      That would be telling…

      • sigher
      • 13 years ago

      Doesn’t matter, valve is too unreliable to be counted on in either case, it will be out when it’s out.

    • evermore
    • 13 years ago

    What does the score in the Source particle benchmark represent? Just seeing the differences in the scores is somewhat informative, but knowing what they mean would be moreso. Right now it’s like answering “15” when someone asks how fast your computer is.

    I have a proposition that would obviously never be accepted but sounds neat. Distribute gaming processing across everybody in the same game on a server, or everybody on the server. If somebody’s got a blazing computer, some of their power would go to helping out people with slower machines so that everybody gets about the same experience. It would have to be stuff that really really really didn’t care about latency of course. And obviously the guy with the fast machine would be pissed, but it would be interesting to make it work.

      • mattsteg
      • 13 years ago

      That’s an absolutely awful idea and would never even come close to working. There’s flat-out not that much stuff that isn’t far too latency intensive, even without the pile of other problems.

        • BooTs
        • 13 years ago

        You could have just said, “Screw you, hippie!”

        But yeah, that’s entirely unfeasable.

    • tempeteduson
    • 13 years ago

    …

      • VTOL
      • 13 years ago

      Same difference

        • sigher
        • 13 years ago

        Steam is the content delivery system and GUI, Source is the Half-Life 2 engine, they are not even remotely the same.

    • Bensam123
    • 13 years ago

    I thought the physics revolution was supposed to come around with PhysX?

    Realistic AI I already saw comming from that one company (forgot the name).

    Just makes me wonder if engine developers now have so much power at their disposal they’re looking to take on things they don’t really need to. Personally I think HL2 could use some better hit detection and net code before better physics and ai. Just like processer makers are going to try and take on the tasks of graphics as well as general processing.

    Perhaps this sort of thing shows that we aren’t thinking far enough into the future for the amount of resources we have at hand.

      • evermore
      • 13 years ago

      Hit detection doesn’t seem like something that’s particularly CPU bound by itself, it’d just be a sub-system of the physics engine. Quality of the hit detection is probably more about good coding and design than CPU speed. Your shot’s going to hit in the same place whether your machine is fast or slow, you just won’t see as many effects or get the same framerate. More CPU power might help if the reason they make it so bad is that making it better would be a quite large CPU hit somehow, but that seems unlikely.

        • Bensam123
        • 13 years ago

        Let me reword it a bit. If they have enough resources to devote them to expanding upon AI and physics, why couldn’t they spend a bit on making what they have better?

          • BooTs
          • 13 years ago

          That’s what they are doing. They are making it better, by expanding on it. The more CPU time you can dedicate to AI and Physics from a design budget means the more you can actually do. If you only get .002% CPU time dedicated to AI, you’re going to have some really stupid baddies.

          AI and Physics in current games are not limited by what can be programmed. They are limited by what can be programmed to run with minimal CPU usage.

            • Bensam123
            • 13 years ago

            So you’re saying that the time and resources they’re spending on expanding physics and AI can’t be spent else where?

            It’s like putting lots of a eye candy into a really crappy game. It looks really nice but the game still sucks.

            As I said before I would prefer good hit detection and better net code over expanded AI and physics. It seems especially redundant such as other companies are tackling such things completely rather then on a side note.

            Now if the independent solutions were inadequate they would have a case.

            Wonder why improved sound isn’t in there (which IMO is one of the things the source engine needs improved the most).

            • BooTs
            • 13 years ago

            Hit detection is pretty simple. There is room for improvement there, but who cares? You want to deal 5 less damage because you shot a guy in his left hand, and he doesn’t need that one anyway?

            Netcode is trickier, but I don’t think it has changed all that much since QW.

            Not much room for fantastic expansion of the game experience in either of those items.

            AI and Physics are definately the way to go.

            You complain about them putting too much eye candy in to a crappy game? Well AI is the bulk of the game experience. AI and Physics add immersion to the game.

            Anyway, to me, complaining about poor netcode and hit detection is something out of 1999 when someone couldn’t handle getting trounced in Quake.

            • Vaughn
            • 13 years ago

            You obviously don’t play CS Source in its current state. The hit detection and netcode are its biggest problems right now, and would be a far greater improvement to the gameplay, that adding dualcore support at the moment. And that example u gave is terrible. Its more like shooting a guy in the face with a shotgun and doing 0-10hp damage or none at all. Or killing someone clearly after they ran past a wall, the hitboxes also need major work in CS source.

            • BooTs
            • 13 years ago

            Sounds more like latency issues than design issues. Unless there are somehow false negatives when determining hits, I highly doubt there are any issues with hit detection. Hit detection is very, very simple, and takes place on the SERVER. Hitting people after they round a corner is latency (or possibly a result of bad game design, where you can shoot through walls. derp.) I assume the netcode in CS:S is atleast as advanced as QuakeWorld, and if it is, then any thing like shooting someone and they die after they run around a wall is just latency.

            Unless the netcode has devolved back to NetQuake era, and is not asynchronous with prediction, those issues aren’t things programming will fix.

            • Bensam123
            • 13 years ago

            As he said you obviously haven’t played CS. I’ve sat behind people with a gun (any gun) and at sometimes done zero damage to the back of their skull and then they whip around and shoot me in the face.

            Hit detection is what means you live and they die, its your score and for some of us it’s what really, really matter (5-10 damage matters). Although the chairs of inifinite death or the fileing cabinets of snare +25 are pretty bad (horrible, horrible physics), hit detection and lag are a more formidible enemy.

            Hardcore gamers will scream and rave at their computer screen when they die cause their bullets count. They won’t scream at their screen for not displaying as much eye candy.

            They should optimize and/or re-write select portions of what they have before taking on something new. IMO they’re doing this for face and braging rights, not because the engine needs it.

            If you want a good example of a highely optimized and very effecient engine you should look into the torque engine which powered the Tribes series. Best hit detection and net code I’ve ever seen. You can play on dial-up with absolutely no trouble and do decent.

            • BooTs
            • 13 years ago

            You do know that CS randomizes aim right? Maybe that’s news to you.

            I have never seen any hit detection glitches in QuakeWolrd, Q2, Q3, HL, CS, or any other game I’ve played. I’m pretty sure what you’re crying about is randomized aim and poor latency.

            Programming hit detection and netcode are fairly simple things. If some programmer could do it properly 4, or 10 years ago, why do you think it is being done wrong now? Do you think that the hit detection code from one game is some super secret formula? No. The industry as a whole advances together. Game makers who don’t keep up go out of business. I don’t understand why you would think things as simple as netcode and hit detection could be ‘forgotten’ by programmers. As if that knowledge was some how lost.

            Maybe you should try playing a game that doesn’t depend on randomized aiming to introduce difficulty.

            • Sanctusx2
            • 13 years ago

            Don’t mean to intervene, but I would never consider CS:Source (a graphically enhanced version of a very very old game) to be a good example of either network/server coding, nor AI, nor hit detection. Those were never its strong points. You can easily find games that pre-date Counterstrike alpha that had better hit detection and a more robust network layer.

            It seems to me like you’re specifically frustrated with the fact that they haven’t updated/modified Counterstrike rather than being unhappy with the latest developments in multicore utilization.

            edit: Sorry, replying to Bensam not Boots.
            edit again: Hey I guess I’m just agreeing with what you stated already. I suppose Valve just has different priorities for their games. They don’t seem to believe such things are a high priority(or are best left up to the people their customers) and are instead are more interested in developing their engine.

            • fatpipes
            • 13 years ago

            It’s definately Valve putting priority on their game engine, which is completely understandable. The problems you’re talking about are pretty narrow. Multi-core optimization is key for Source to remain a leading game engine. You may think a company like Valve is obligated to its end-users, but at this point, I think Valve is more into larger context content delivery, engine licensing and supporting their licensed developers.

            Sure it’s shitty that a company would neglect those users that helped grow their company, but as someone said before, they’re looking at the future. The more resources available to future Source games means a greater design budget for more creative and impressive mechanics. Making this available now is really a priority as there are always new games being developed on Source and I’m sure their developers are howling to have this done before they start production on their own projects.

            You may be paying $50 a box, but those developers are investing millions and paying a fraction of that to Valve to make you want to spend $50 on that box. The game of business has always eclipsed the consumer.

            • Bensam123
            • 13 years ago

            CS randomizes shots my ass. Have you ever played the game? You know the first shot out of just about any gun is dead on center right? Source is retarded like that. You go and play any other game and its randomized from the first shot to the last (depending upon variables like movement, injury, speed, acceleration), but you play source and the first shot always seems to hit dead center no matter what. I guess they developed it to be that way, no need to fix it.

            I believe hit detection and net code aren’t super secret but I’m guessing, just guessing that each company has to develop their own version and depending on how they do it, depends upon how good the implementation is. Unless there is a open source version that everyone uses that I don’t know about.

            I also believe having a firm foundation is essential for driving forward in the future, taking two steps back to take one forward is also essential (the Core 2 Duo is a prime example of it). Going back and fixing mistakes, optmizing code (which isn’t really fixing a mistake) and overhauling parts of the engine to match the times is part of making a good one.

            Yes adding more expansive AI and better physics to the game would drive it forward but it is also redundandt. Other companies are doing it and are doing a better job then Valve could ever do because they’re specializing in just that area. So why shouldn’t they spend those resources on something productive?

            Don’t mis-understand me here, I’m not talking about multi-threading source, in contrast I think thats a good idea. I’m talking about the part where they’re taking on AI and Physics to a greater extent ‘just cause they can’.

            • BooTs
            • 13 years ago

            CS randomizes shots. Take a game like Quake3. No shots are randomized. No lucky hits or unlucky misses.

    • VTOL
    • 13 years ago

    About time.

      • Logan[TeamX]
      • 13 years ago

      Seconded. I’ll have a reason to reinstall CS:S and HL2 then.

    • DrDillyBar
    • 13 years ago

    Kudos to Valve for being the first to really devote energy to multithreading. Looking forward to the updates to the engine.

      • stdPikachu
      • 13 years ago

      Haven’t Epic been working on a multithreaded (AND multiplatform) Unreal engine for the past two years?

        • StashTheVampede
        • 13 years ago

        Unreal3 engine has shipped with Gears of War — it’s a great presentation on the 360. Carmack stated he has one more engine in him — multithreaded must be a heavy focus.
