Futuremark unveils new VR benchmarks and Servermark tests

Futuremark, best known for the wildly popular 3D gaming prowess test 3DMark, is at both the Mobile World Congress event in Barcelona and the Game Developers Conference in San Francisco. The company is showing off its latest enhancement to VRMark, a new test dubbed the Cyan Room. Futuremark is also talking up its forthcoming mobile VR tests for VRMark, and a new type of benchmark intended for servers called Servermark.

When VRMark came out, it included an Orange Room test and a Blue Room test. The Orange Room is a relatively low-intensity test that measures a PC's suitability for VR by comparing it to the baseline requirements for the Oculus Rift (which are very similar to those for the HTC Vive). Meanwhile, the Blue Room test is much more demanding, and passing it indicates that a PC is suitable for the latest and most demanding VR games and applications. Both of these tests use DirectX 11.

By contrast, the new Cyan Room benchmark uses DirectX 12 and is intended to be a showcase for the newer API, much like the Time Spy test the company previously released for 3DMark. Futuremark says the new Cyan Room test will demonstrate how DirectX 12 can make even a modest system suitable for "impressive VR experiences." As with the other tests, you don't need a VR headset hooked up to run the benchmark. You do, however, have to buy VRMark, which costs $20.

Futuremark also says it has "new benchmark tests" coming in the near future that are designed specifically for mobile VR platforms like the Samsung Gear VR. The company says the tests will be geared towards measuring both best- and worst-case performance so that users can see what their device is capable of, and what happens when it's been under heavy load for a long time. Mobile VR benchmarking software isn't very widespread, so Futuremark's addition is certainly welcome.

Finally, Futuremark is also showing two of the tests from its Servermark software at GDC and MWC. The Servermark VDI test is a benchmark intended to determine how many virtual desktops a server can support without performance degradation. Meanwhile, the Servermark Media Transcode test, as its name implies, measures a server's media transcoding chops. It's quite interesting to see Futuremark stepping beyond its gamer-focused 3DMark software.

Comments closed
    • nerdrage
    • 2 years ago

    I can’t wait to play it!

    • CuttinHobo
    • 2 years ago

    For some reason I don’t expect many sysadmins to rely on Servermark scores that aren’t likely to correlate to real world performance.

    “How many Stanley Nickels per Schrute Buck?”

    “The same ratio as leprechauns to unicorns.”

      • Redocbew
      • 2 years ago

      I wish I had more than three up-thumbs to give.

      I understand what they’re trying to do here, but having “servermark” around can only complicate the job of administering a server when there isn’t a real sysadmin on staff who already knows what the hardware is capable of handling.

        • BurntMyBacon
        • 2 years ago

        [quote="Redocbew"]I wish I had more than three up-thumbs to give.[/quote] Let me help you out a little.

      • slowriot
      • 2 years ago

      There’s room for them to build an interesting suite of tools here. I don’t know if that’s what they’ll do but there’s an opportunity if done right. If they created a bundle of tools that could help admins and engineers diagnose issues and identify bottlenecks then maybe. But I’m not sure Futuremark has any of that in mind.

        • BurntMyBacon
        • 2 years ago

        I could see this being useful as a standard metric that could serve as a quick and dirty estimate of performance, helping manufacturers and clients narrow the scope of systems to evaluate. At the end of the day, however, a competent admin will identify the bottlenecks in the current setup using the actual workload that the new system will be running and focus on alleviating said bottlenecks. It doesn’t really matter how well a system performs in an arbitrary workload, no matter how “Real World” it gets, as long as it works well in the workload that it will be presented with in production.

        In the home environment, a generic multi-discipline evaluation can be useful as the tasks of a Desktop or even Workstation PC can vary from day to day and its primary use case can change over time. Servers, however, are generally implemented for a specific purpose and will continue to service that purpose until they are decommissioned. A generic multi-discipline evaluation is far less useful here.

      • Klimax
      • 2 years ago

      There aren’t that many benchmarks for servers available. Depending on what’s included, it could be interesting.

        • AnotherReader
        • 2 years ago

        The right benchmark for a server is your own workload.

          • ColeLT1
          • 2 years ago

          I’m not disagreeing at all, but what if the benchmark gives a (large) list of different workload metrics where you pick use scenarios and then it gives you scores for those scenarios.

          For example, you run the benchmark and it runs 100 tests, you only care about 2 of those, you hit the checkboxes and bam, there is your score for what you care about. Compare differently spec-ed out servers (more cores, less cores, more GHz, less GHz, different sockets, different brands, etc.). Then you can have something useful.
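          The idea in that comment could be sketched in a few lines. This is purely hypothetical (Servermark exposes no such API, and the test names and scores below are made up): run everything once, "check the boxes" for the workloads you actually care about, and reduce just those sub-scores to one composite number.

          ```python
          # Hypothetical sketch of the "pick your workloads" idea from the
          # comment above. The test names and scores are invented; nothing
          # here reflects a real Servermark interface.
          import math

          # Results of a full run (test name -> score); a real run might
          # have 100 such entries.
          all_results = {
              "vdi_sessions": 480,
              "media_transcode": 1250,
              "database_oltp": 910,
              "web_serving": 2040,
          }

          def composite_score(results, selected):
              """Geometric mean of only the selected sub-scores."""
              scores = [results[name] for name in selected]
              return math.exp(sum(math.log(s) for s in scores) / len(scores))

          # "Hit the checkboxes" for the two workloads this server will run.
          picked = ["vdi_sessions", "media_transcode"]
          print(round(composite_score(all_results, picked)))
          ```

          A geometric mean is used here (rather than a simple average) so that one huge sub-score can't dominate the composite; comparing two differently spec-ed servers would then just mean comparing their composites over the same checked boxes.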

    • Neutronbeam
    • 2 years ago

    Is the Green Room where everybody waits for the other tests to get started?

      • Voldenuit
      • 2 years ago

      The Red Room is where you find out which suit burns better.
