Etc.

Wow. Long week here in Damage Labs, and I’m pretty shattered as a result. We were working late last night, recording the podcast, which should be released over the weekend. Some good discussion on this latest one, I think.

On top of that, I’ve been working on a server-related review, logging long hours to get complex benchmarks like SPECjbb running properly and optimally tuned. I’m using almost entirely new hardware and software this time around, including the collector machine for SPECpower, so the setup work has taken days. The good news there is that I’m now producing valid results on multiple machines, so much of the toughest effort is behind me. Still ahead: running our virtualization benchmark and the addition of some hotly anticipated new arrivals from Austin. We’ll have to see how this all works out in terms of publication timing, but I’m hoping we’ll have some nice articles about this class of CPUs soon.

The rest of my work queue looks kind of insane, with too many projects in flight and none ready for publication. The Core i7-3820 is largely tested, but I need to overclock it and prep the article. My new GPU rigs are fully configured and waiting for my attention. The parts for new desktop CPU test rigs have largely arrived, as well. Also, we’re cooking up something fun related to that MSI open-air case we showed you recently. I just don’t know how much will get done before I’m off to another press event.

One item of note: if you have suggestions for programs we could use as desktop CPU benchmarks, now is the time to offer them. We don’t revamp our CPU suite all that often, since we like to build up lots of results for comparison, but the window is now open.

Some updates and new additions are already in the works, including a multithreaded code-compiling benchmark we’ve developed. We have new versions of MyriMatch and picCOLOR, old stalwarts whose authors have worked with us over time to optimize for new architectures and such. I’m open to additional suggestions, including perhaps a new Folding@home benchmark from notfred? (Hint, hint.) We’re happy to consider particularly CPU-bound games to test. I think Skyrim has already earned a spot, along with BF3 perhaps, but I’m eager to hear your thoughts.

And yes, we will be going inside the second with our CPU game testing. In fact, I have some nifty new visualization tweaks I’d like to try. I think we can potentially offer a more complete and easier-to-read map of the frame latency picture. Should be interesting.
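By way of illustration, here is a minimal sketch of the sort of number-crunching behind a frame-latency plot, assuming a FRAPS-style CSV of cumulative frame timestamps in milliseconds. The file name, column layout, and 50-ms threshold are placeholders rather than our exact tooling.

```python
# A rough sketch of how frame-time data can be summarized, assuming a
# FRAPS-style CSV of cumulative frame timestamps in milliseconds. The file
# name, column layout, and 50 ms threshold are placeholders.
import csv

def load_frame_times(path):
    """Return per-frame render times (ms) from cumulative timestamps."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    stamps = [float(r[1]) for r in rows[1:]]  # skip the header row
    return [b - a for a, b in zip(stamps, stamps[1:])]

def summarize(times, threshold_ms=50.0):
    """Compute frame count, average, 99th percentile, and time beyond X."""
    ordered = sorted(times)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    beyond = sum(t - threshold_ms for t in times if t > threshold_ms)
    return len(times), sum(times) / len(times), p99, beyond

if __name__ == "__main__":
    frames, avg, p99, beyond = summarize(load_frame_times("frametimes.csv"))
    print(f"{frames} frames, avg {avg:.1f} ms, 99th pct {p99:.1f} ms, "
          f"time beyond 50 ms: {beyond:.0f} ms")
```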

Comments closed
    • evilpaul
    • 8 years ago

    I’m not sure how broad the audience interest would be, but I’d kind of like to see some PlayStation 2 emulation benchmarks. A lot of games are getting “remastered” or “HD versions,” like the God of War games, ICO/Shadow of the Colossus, Final Fantasy X, Beyond Good and Evil, etc. But these are essentially just doing on a PS3 what you could already do on a fast PC with PCSX2. The Final Fantasy X intro seems to be the community standard, and I think it would be interesting to see the TR frame-time stuff as it applies to various hardware.

    • CityEater
    • 8 years ago

    I know this has been discussed before, but if there were any way you could test some professional video apps like Creative Suite, that would be useful to me. I’m sure several of us are running video workstations, and since I regularly do long renders (2 hr+) with cores and RAM maxed out, small performance gains reap good benefits for me in terms of my time.
    Maybe some pressure could be put on Adobe to provide you with a free copy of AE or something? I know this is a big ask and has come up before, and I appreciate all the work you guys do. Another way of working around having to buy extremely expensive software would be to use something from the RED software packages, but I’m not sure how optimized they are.
    Conversion from 4K 16:9 to an interlaced SD 10-bit file should put the sweat on.

    Long-time reader (and listener), first-time poster…

      • mthguy
      • 8 years ago

      1.) Welcome to TR; glad you decided to finally post.

      2.) A second for AE and/or Premiere.

      3.) Which RED software packages are you referring to?

        • CityEater
        • 8 years ago

        Thanks, been reading since 2002 or something, so it’s a long time coming. I forget which is which with the RED software. Is it Red Alert, the one where you can batch convert and conform? I must admit I haven’t dealt with much of the RED software since it went native in CS4, but I think I used one of them to batch convert into 1080p TIFF sequences a few years ago, and a couple hours of footage churned overnight for some epic 4-5 hour render, maybe more, I don’t quite remember.
        Plus, this software is free, and I’m sure some of us can provide a nice chunky file to convert, but I don’t know how optimized it is for multicore. Possibly very optimized; I could have a look if Scott was interested.
        I just think people investing in these workstation parts are likely to be putting them to workstation applications, and video workstations (at least mine) get a thorough workout fairly regularly.
        It’s a shame Media Encoder doesn’t work without the other software installed; otherwise you could put together a fairly complex AE project without the need for any large video resources behind it and let the computer go to work. I’m looking for something that takes 1-2 seconds a frame and turns your computer into a blow heater.
        On a side note, it is amazing how un-optimized the Creative Suite packages have been, and continue to be, for 90% of professional tasks.

          • CityEater
          • 8 years ago

          Actually, thinking laterally, could you use something open source like Lightworks? I’ve never used it (and don’t know anyone who has), but it should be able to stress anything you throw at it, provided it has a broad range of format and codec support. Simple video-to-video conversion between two distinctly different formats should have smoke pouring out of Damage Labs before long.

    • MadManOriginal
    • 8 years ago

    For audio encoding, please set up a program that is able to run more than two threads. It’s not uncommon or hard to do, and while audio encoding isn’t a big deal anymore for small batches (encoding a large library does still take a while), it may show some interesting results between architectures, or at least show an advantage for more cores or threads.

    foobar2000 is one option where more than two threads can be set up.
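For illustration, here is a minimal sketch of what a multithreaded audio-encoding test could look like, with one encoder process per worker. It assumes the flac command-line encoder is on the PATH; the folder name and worker counts are placeholders.

```python
# A sketch of a multithreaded audio-encoding test: encode a folder of WAV
# files in parallel, one encoder process per worker, and time each worker
# count. Assumes the "flac" command-line encoder is on the PATH; the folder
# name and worker counts are placeholders.
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def encode(wav: Path) -> None:
    # -f overwrites existing output; -8 is the slowest, highest compression
    subprocess.run(["flac", "-f", "-8", str(wav)], check=True,
                   capture_output=True)

if __name__ == "__main__":
    wavs = sorted(Path("music").glob("*.wav"))
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(encode, wavs))
        print(f"{workers} workers: {time.perf_counter() - start:.1f} s")
```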

      • BobbinThreadbare
      • 8 years ago

      You can set more threads, but does it actually use them?

    • Jason181
    • 8 years ago

    I’m not sure how interested others are (feel free to chime in!), but I’d love to see some very heavy Excel computations. Even more interesting, but probably asking for too much, would be a heavily threaded Excel computation and a highly serial Excel computation.

    Not many office programs bring a computer to its knees, but those hefty spreadsheets can take several minutes to run.

    Another possibly interesting idea would be to do a CPU utilization trace on some of your benchmarks to determine how much threading is actually going on. It might shed some light on how much performance comes from clock speed versus the number of cores.

    Finally, this might be a long shot, but you could investigate the effect of Hyper-Threading (or the FPU sharing of Bulldozer modules) on application performance. This might be especially enlightening combined with inside-the-second game testing.

    Whatever you guys do, I’m sure it will end up working out well.
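A rough sketch of the utilization-trace idea above: sample per-core CPU usage while a benchmark runs, to see how many cores it actually keeps busy. This assumes the third-party psutil package; the benchmark command is a placeholder.

```python
# Sample per-core CPU usage while a benchmark runs, to see how many cores it
# actually keeps busy. Assumes the third-party psutil package; the benchmark
# command is a placeholder.
import subprocess
import psutil

def trace(cmd, interval=0.5):
    """Run cmd and return a list of per-core utilization samples."""
    samples = []
    proc = subprocess.Popen(cmd)
    psutil.cpu_percent(percpu=True)  # prime the counters; first call is 0.0
    while proc.poll() is None:
        # Blocks for `interval` seconds, then reports per-core utilization.
        samples.append(psutil.cpu_percent(interval=interval, percpu=True))
    return samples

if __name__ == "__main__":
    samples = trace(["some_benchmark.exe"])
    for core in range(len(samples[0])):
        avg = sum(s[core] for s in samples) / len(samples)
        print(f"core {core}: {avg:.0f}% average utilization")
```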

    • Alchemist07
    • 8 years ago

    It would be great to see a good multitasking benchmark. With the latest processors, we now have between two and eight cores available. It would be great to see how efficient these processors (Sandy Bridge, Ivy Bridge, Bulldozer) are at running three or more applications at the same time (frame rates/performance/power consumption).
    For example, if using Eyefinity, I would have three screens: 1) running a game, 2) browsing the internet with 6+ tabs open (Chrome/Firefox), and 3) streaming/playing a video, or some other application/task such as encoding, WinZip, etc. Point being, I always do 1 and 2, and sometimes some other thing (3), at the same time.
    I’m always surprised that there are no benchmarks of such multitasking. It would be great to see which processors multitask better and whether there is any benefit in having extra (eight) cores. I don’t think the convention of testing one application at a time reflects at all what most users do with their PCs.
    Example: first-person shooter / Chrome with 10 tabs (2 video streams) / ripping a DVD / playing music.

      • BobbinThreadbare
      • 8 years ago

      Your final example seems a little out there, but I guess if the idea is just to push machines to their limits it might work.

      • OneArmedScissor
      • 8 years ago

      “I don’t think at all the convention of testing one application at a time really reflects what most users do with their PCs.”
      That may be partly true, but they’re not running more than one CPU-intensive application. For example, no matter how avid a gamer you are, you don’t play two games at the same time. :p
      So leaving a bunch of web pages open (static), playing music (doesn’t take a CPU out of idle), ripping a disc (very low transfer speed), and potentially even playing videos (GPU acceleration) isn’t a significant strain on the CPU. If you are zipping or unzipping something, that isn’t really persistent enough. Encoding, maybe, but much as with GPU acceleration, now there’s Quick Sync.
      Some sites did actually test things like that years ago. The trouble now is that you’d have to invent some very outlandish scenario.

        • Alchemist07
        • 8 years ago

        Not sure; what else do people do to push their CPUs to the limit? I remember back in the day my PC used to struggle with playing a game (Counter-Strike: Source) while watching TV on a separate monitor via an external USB TV card (Nebula brand). It was a Core 2 Duo.

      • Bensam123
      • 8 years ago

      I’m going to second this… especially if you play Minecraft or MMOs, you’re always alt-tabbing and doing other things in the background.

      Adding to this, a lot of people have a second monitor set up (it doesn’t even need to be Eyefinity) with other apps open on it. I, for instance, have all my IMs, Argus Monitor, various gadgets, and sometimes my web browser open on it while doing something on my main monitor. While they don’t use a lot of processing power in and of themselves, there are quite a few open. How well the computer balances and handles the workload does matter to me, which may actually be a key point for AMD CPUs, as they have a more decked-out take on HT.

      Like Alchi said, a lot of sites do the whole ‘let’s test the shit out of this game’ approach, but they don’t take into account how many other things many users have going on in the background, which really are confounding variables. Testing one thing at a time may be the most logical approach for getting the absolute best results for one specific application, but people aren’t always logical, nor do they do only one thing at a time.
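For what it’s worth, a bare-bones sketch of the multitasking harness this sub-thread describes: launch background loads, run the foreground benchmark, then clean up. Every command below is a placeholder; a real test would also log frame times and power draw during the run.

```python
# A bare-bones multitasking harness: start background loads, run the
# foreground benchmark, then clean up. Every command here is a placeholder;
# a real test would also log frame times and power draw during the run.
import subprocess

BACKGROUND = [
    ["chrome", "--new-window", "http://example.com"],             # browser tabs
    ["ffmpeg", "-i", "input.mkv", "-c:v", "libx264", "out.mkv"],  # encode load
]

def run_with_background(foreground_cmd):
    procs = [subprocess.Popen(cmd) for cmd in BACKGROUND]
    try:
        # e.g., a game's built-in benchmark mode
        subprocess.run(foreground_cmd, check=True)
    finally:
        for p in procs:
            p.terminate()

if __name__ == "__main__":
    run_with_background(["game_benchmark.exe"])
```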

    • Mr Bill
    • 8 years ago

    These may be a couple of moot questions…
    What about running some typical background tasks that would be common and seeing if that affects frame rates, particularly “inside the second”? I just started running VLC in the background playing music while I play WoW. I get the impression it does not affect play much, but are there other programs one might have running that would?

    Secondly, I have one program (not a game) that I have to run in VMware Player on my Win7 laptop. Are some virtualization platforms better than others with respect to gaming? Are there people out there running games in virtual machines while the main part of the CPU’s processing is allocated to another task?

      • BobbinThreadbare
      • 8 years ago

      “Are there people out there running games in virtual machines while the main part of the cpu processing is allocated to another task?”

      I’m trying to figure out why someone would do such a thing.

        • yokem55
        • 8 years ago

        Well, it isn’t too uncommon for people to set up their old DOS games in a VM.

          • BobbinThreadbare
          • 8 years ago

          OK, but do we need benchmark results for this?

        • LaChupacabra
        • 8 years ago

        My ideal setup would be to run a hypervisor on my computer. That would let me switch between Windows 7, 8, XP, Ubuntu, Fedora, or any number of server OSes. The only reason I don’t do this is that I still like to play video games. I would be willing to spend a chunk of money on a hypervisor-like product that had good gaming performance.

    • CampinCarl
    • 8 years ago

    So, I asked about this on the forums (https://techreport.com/forums/viewtopic.php?f=20&t=80438), but no one bit, so maybe I’ll drop the question here:

    Damage: Has there ever been a discussion about implementing a feature on the website similar to Anandtech’s ‘Bench’ feature (i.e., comparing individual pieces of hardware that were tested on the same foundation, etc.)?

      • TaBoVilla
      • 8 years ago

      This would be helpful but heavily time-consuming; they would need additional people dedicated solely to this task.

        • CampinCarl
        • 8 years ago

        See, I’m not sure that they would, beyond the initial development. As long as everyone formats whatever they store data in (Excel, I assume) exactly the same way every time, it shouldn’t really need to be modified aside from bug fixes. After that, assuming there was enough forethought put into it, it should be able to just be run once after they finish a test (say, right before they post the article) to update the database on the website.

        Maybe I’m just overly optimistic, but I think TR could find enough volunteers to do the job (I wouldn’t mind doing a lot of the legwork on the data-into-database stuff, though I admit that might be the easiest part).
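As a sketch of that idea: if every review’s results landed in a CSV with the same columns, one small script could push them into a database the site queries. The table and column names below are invented for illustration.

```python
# If every review's results land in a CSV with the same columns, one small
# script can push them into a database the site queries. The table and
# column names here are invented for illustration.
import csv
import sqlite3

def ingest(csv_path, db_path="bench.db"):
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS results (
                       product TEXT, benchmark TEXT, score REAL, article TEXT,
                       PRIMARY KEY (product, benchmark, article))""")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            con.execute("INSERT OR REPLACE INTO results VALUES (?, ?, ?, ?)",
                        (row["product"], row["benchmark"],
                         float(row["score"]), row["article"]))
    con.commit()
    con.close()

if __name__ == "__main__":
    ingest("core-i7-3820-results.csv")  # placeholder file name
```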

      • Bensam123
      • 8 years ago

      Yeah, I made a post a few months ago about a retroactive price chart comparing market prices of hard drives, too, but that got the “that sounds like too much work” response.

      Really cool stuff… looks like it’s beyond TR’s scope, sadly.

    • anotherengineer
    • 8 years ago

    “We’re happy to consider particularly CPU-bound games to test”

    Isn’t the Source engine fairly CPU-intensive? The soon-to-be-released CS:GO is apparently supposed to run an updated version of the Source engine, and quite a few other games use the Source engine as well.

    Other than that, I’d like to see more real-world tests, and things like USB 3.0 chips and drivers vs. performance, etc.

    And is there any software that takes advantage of the Radeon HD 7000 series’ video encoding/decoding, and any new folding software?

    • BobbinThreadbare
    • 8 years ago

    Can you guys look at Total War: Shogun 2?

      • Zoomastigophora
      • 8 years ago

      This.

        • Aloeus
        • 8 years ago

        thirded

    • OneArmedScissor
    • 8 years ago

    It would be really nice if you used a different power test than Cinebench. Nobody leaves their computer running that all the time. :p

    Power use while playing a game would be more applicable to most people, even if it’s not a 100% CPU load, but how many things really are? Even when the task manager says the CPU is pegged, it’s typically nowhere near as loaded as, say, Linpack would push it.

    I could also live with video encoding.

    • Duck
    • 8 years ago

    For a CPU test, you could measure the time taken to decompress a CD’s worth of music from FLAC files and then encode it with the Nero AAC encoder at Q0.40.
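A sketch of that test as a script: decode an album’s FLAC files to WAV, then encode with the Nero AAC encoder at quality 0.40, timing the whole run. It assumes the flac and neroAacEnc binaries are on the PATH; double-check the flags against your encoder version.

```python
# Decode an album's FLAC files to WAV, then encode with the Nero AAC encoder
# at quality 0.40, timing the run. Assumes "flac" and "neroAacEnc" are on the
# PATH; double-check the flags against your encoder version.
import subprocess
import time
from pathlib import Path

def convert_album(folder):
    start = time.perf_counter()
    for src in sorted(Path(folder).glob("*.flac")):
        wav = src.with_suffix(".wav")
        m4a = src.with_suffix(".m4a")
        # Decode FLAC to WAV (-d decode, -f overwrite, -o output file).
        subprocess.run(["flac", "-d", "-f", "-o", str(wav), str(src)],
                       check=True, capture_output=True)
        # Encode WAV to AAC at the suggested Q0.40 quality setting.
        subprocess.run(["neroAacEnc", "-q", "0.40",
                        "-if", str(wav), "-of", str(m4a)],
                       check=True, capture_output=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"album converted in {convert_album('album'):.1f} s")
```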

    • OneArmedScissor
    • 8 years ago

    For new desktop CPU benchmarks, DAWBench:

    http://www.dawbench.com/benchmarks.htm

    This will show the limitations of any CPU, no matter how powerful, and in the real world, with an application a person can actually use. There is nothing simulated, synthetic, or hypothetical about it, and no real performance ceiling. Rather than being an isolated exhibit of just one performance aspect, it’s one of very few things that exhibits tangible changes in a combination of multithreaded scaling, and memory bandwidth, and system latency from one CPU design/platform to another.

    While it may not be applicable to everyone, I would argue that most benchmarks are totally faked and applicable to no one at all. At least this is real, and it leaves no uncertainty about whether one CPU/platform can outperform another at its peak.

    • codedivine
    • 8 years ago

    “…including a multithreaded code-compiling benchmark we’ve developed”

    Great! Much appreciated!

    Suggestion/request: some Blender-related benchmark.

      • DancinJack
      • 8 years ago

      Yeah, I got excited too.

      • Flatland_Spider
      • 8 years ago

      Agreed. Hard data is better than my own guesstimates.

      • srg86
      • 8 years ago

      Agreed; this is what I do with my computers more than anything else, so this one will be very interesting. It would be great if there were a GCC one as well as Visual Studio, though I’ll take any I can get!

        • DancinJack
        • 8 years ago

        GCC would be nice indeed.

      • chuckula
      • 8 years ago

      YES! Compiling is one area where multiple CPU cores can really give you a boost, and it’s an area where a non-trivial portion of the TR community actually has an interest beyond just bragging rights.

      I give a big thumbs up for parallelized compiler benchmarks in general, and GCC benchmarks in particular.

      • yokem55
      • 8 years ago

      They ought to try setting up an Android ICS build environment. The requirements supposedly include 16+ GB of RAM, a fast SSD, and lots and lots of CPU cores to build it in a reasonable amount of time.
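A generic sketch of how a compile benchmark could be timed, building the same source tree at several make -j levels. The clean/build targets are whatever the project provides; make itself is the only assumption here.

```python
# Build the same source tree at several "make -j" levels and time each run.
# The clean/build targets are whatever the project provides; "make" itself
# is the only assumption.
import subprocess
import time

def timed_build(jobs, tree="."):
    subprocess.run(["make", "-C", tree, "clean"], check=True,
                   capture_output=True)
    start = time.perf_counter()
    subprocess.run(["make", "-C", tree, f"-j{jobs}"], check=True,
                   capture_output=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for jobs in (1, 2, 4, 8, 16):
        print(f"-j{jobs}: {timed_build(jobs):.1f} s")
```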

    • Arclight
    • 8 years ago

    FIIII1111111111rrrrrsssssssttttttttt, now let me RTFA.

    Edit:
    “And yes, we will be going inside the second with our CPU game testing”

    Don’t you mean GPU testing? Or are you suggesting that CPUs influence GPU results in “inside the second” benchmarks? If that’s the case, it hasn’t been said before, AFAIK.

      • Damage
      • 8 years ago

      CPUs play a role in gaming performance, too. Our methods test gaming performance, so….

        • Arclight
        • 8 years ago

        I thought inside the second was more dependent, well, on the GPU. As for average, minimum, and maximum fps, I understood it’s very dependent on the CPU, but only now do I understand that video cards exhibiting “lag” can perform better or worse in this metric depending on the CPU used. Is that what I should understand? Please excuse my newbieness/noobness; I just want to understand.

          • Damage
          • 8 years ago

          We’re simply measuring game performance in terms of frame production times. Both the CPU and GPU are involved in the production of each frame of a game. The CPU portion involves both driving the GPU (including the real-time HLSL compiler) and tracking/updating other things, like AI, physics, user input, networking. Heck, the entire physical simulation of the game world has traditionally happened on the CPU, with the GPU just creating the visuals. Usually, as I understand it, game engines have had a main timing loop based on frame production. Everything gets updated once and then you increment to the next frame.

          A lot of newer games aren’t terribly CPU-bound since they have to run on the dreadfully slow console processors, but there are games that require more CPU power–either because they’re PC-centric and offer richer simulations or because they’re lousy console ports. Either way, we can measure the impact of CPU performance on frame times with the same basic methods we use for GPU testing simply by keeping the GPU the same, testing at a non-GPU-bound resolution, and swapping in different CPUs.
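To illustrate the comparison Damage describes: with the GPU and settings held constant, frame-time logs from runs on different CPUs reduce to percentile curves that can be compared directly. A minimal sketch, assuming one frame time in milliseconds per line; the file names are placeholders.

```python
# With the GPU and settings held constant, frame-time logs from runs on
# different CPUs reduce to percentile curves that can be compared directly.
# Assumes one frame time in milliseconds per line; file names are placeholders.
def percentile_curve(path, points=(50, 75, 90, 95, 99)):
    with open(path) as f:
        times = sorted(float(line) for line in f if line.strip())
    return {p: times[int(p / 100 * (len(times) - 1))] for p in points}

if __name__ == "__main__":
    for cpu, log in [("CPU A", "cpu_a_frametimes.txt"),
                     ("CPU B", "cpu_b_frametimes.txt")]:
        curve = percentile_curve(log)
        print(cpu, " ".join(f"{p}th={ms:.1f}ms" for p, ms in curve.items()))
```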

            • Stargazer
            • 8 years ago

            On the topic of CPUs and inside-the-second testing, have you looked at the effect of Hyper-Threading?

            Scheduling can sometimes end up being sub-optimal on processors with Hyper-Threading enabled, and it would be interesting to see if/how much this would affect frame rates over small time frames.

            • Damage
            • 8 years ago

            No, not yet, but….

            • Stargazer
            • 8 years ago

            I’m intrigued.

        • Bensam123
        • 8 years ago

        Aye, I was waiting for this… specifically the influence of HT-like implementations and microstuttering. I actually shut HT off on my i7 because it made my games seem less fluid. Playing games after it was off seemed quite a bit more fluid and more enjoyable… Could be a placebo, might be a real thing… That’s why they pay hardware testers the big money to figure it out. XD
