Comparing the incomparable
Comparing performance and image quality across three video conversion applications using different hardware presents some inherent challenges. For starters, equalizing encoding settings is difficult, especially with user-friendly apps like MediaEspresso that obscure many of the more advanced encoding parameters. Then there's the issue of the hardware itself. If hardware transcoders don't produce identical output, then is their performance really directly comparable?
In the end, we decided to keep things simple. We grabbed a 1080p version of the Spider-Man trailer, which weighed in at 177MB, and we picked a basic set of common settings for all the encoders to use. We chose to downsample the video to 720p, at a bitrate of 4000Kbps, with a constant frame rate of 24 FPS. The audio was converted to 128Kbps, 44.1kHz, stereo AAC. The idea was to replicate a common usage scenario: shrinking a high-def video to fit on a mobile device, like a modern smartphone or tablet. Those devices may have enough power to decode 1080p video, but their storage space is limited, and they usually lack the display resolution to render a full 1080p image. (Apple's new iPad and Asus' Transformer Pad Infinity are notable exceptions.) Our goal was to find out which application gave us the highest-quality output in the least amount of time.
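As a rough sanity check on those settings, the arithmetic is straightforward. This sketch uses the bitrates above; the 150-second clip length is a hypothetical figure for illustration, not the trailer's actual runtime:

```python
# Estimate output size at our target settings: 4000Kbps video + 128Kbps audio.
# The 150-second runtime is a hypothetical clip length, not a measured value.
video_kbps = 4000
audio_kbps = 128
duration_s = 150

total_bits = (video_kbps + audio_kbps) * 1000 * duration_s
size_mb = total_bits / 8 / 1_000_000  # decimal megabytes
print(f"{size_mb:.1f}MB")  # 77.4MB for a 150-second clip
```

Even with generous assumptions about clip length, the output lands well under the 177MB source file, which is the whole point of the exercise.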
We did come across one little kink: MediaEspresso appears to allow 720p output only with letterboxing. In other words, it renders the 2.35:1 frame inside a taller, 16:9 frame with black bars above and below. MediaConverter supports both native and letterboxed output, while Handbrake has no letterboxing option that we can see. Since our build of Handbrake already differs from the other encoders by using OpenCL instead of a dedicated hardware block, we enabled letterboxing in MediaEspresso and MediaConverter and left Handbrake in native mode.
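For the curious, the letterboxing geometry is easy to work out. Here's a quick sketch; the rounding down to an even height is our assumption, since most codecs require even frame dimensions:

```python
def letterbox_bar_height(frame_w, frame_h, content_aspect):
    """Height of each black bar when fitting wide content into a taller frame."""
    active_h = int(frame_w / content_aspect) // 2 * 2  # round down to even pixels
    return (frame_h - active_h) // 2

# A 2.35:1 trailer inside a 1280x720 (16:9) frame:
print(letterbox_bar_height(1280, 720, 2.35))  # 88-pixel bars top and bottom
```

In other words, a 2.35:1 picture occupies a 1280x544 band of the 720p frame, leaving 88-pixel bars at the top and bottom.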
On the hardware side of things, we had our bases covered. The Core i7-3770K processor provided not just the latest iteration of QuickSync, but also Intel's new HD Graphics 4000 IGP, whose shaders can be programmed using OpenCL. The GeForce GT 640 and Radeon HD 7750 gave us NVENC and VCE hardware blocks, respectively, in addition to support for shader-based encoding using OpenCL or other APIs. We were particularly interested to see how much of a speedup these $100 GPUs could provide over a fast CPU running on its own.
We should mention one last caveat before we go on, which is that the GeForce GT 640 has substantially less memory bandwidth than the Radeon HD 7750. We outlined the differences in our review. To make a long story short, the GeForce's disadvantage is due to its use of slow DDR3 memory, and that slow RAM may have affected performance in our testing. As far as we saw, however, the GeForce wasn't at a substantial disadvantage in any of our tests; it actually outperformed the Radeon by a good margin in one of them. We'd have loved to test another GeForce, but this is the only retail card with the NVENC encoding block south of $399 right now.
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we reported the median results. Our test system was configured like so:
|Processor||Intel Core i7-3770K|
|Motherboard||Asus P8Z77-V LE Plus|
|North bridge||Intel Z77 Express|
|Memory size||4GB (2 DIMMs)|
|Memory type||Kingston HyperX KHX2133C9AD3X2K2/4GX DDR3 SDRAM at 1333MHz|
|Memory timings||9-9-9-24 1T|
|Chipset drivers||INF update 220.127.116.119, Rapid Storage Technology 18.104.22.1682|
|Audio||Integrated Realtek audio with 22.214.171.12402 drivers|
|Graphics||Intel HD Graphics 4000 (integrated) with 126.96.36.19961 drivers|
|||AMD Radeon HD 7750 with Catalyst 12.7 beta drivers|
|||Zotac GeForce GT 640 with GeForce 304.79 beta drivers|
|Hard drive||Samsung 830 Series 128GB|
|Power supply||Corsair HX750W 750W|
|OS||Windows 7 Ultimate x64 Edition with Service Pack 1|
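As noted above, we report the median of at least three runs for each test. In code, that reporting amounts to nothing more exotic than this (the run times below are hypothetical, for illustration only):

```python
from statistics import median

# Hypothetical encode times from three runs of the same test, in seconds
runs = [103.2, 98.7, 101.5]
print(median(runs))  # 101.5
```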
Thanks to AMD, Asus, Corsair, Kingston, Intel, and Zotac for helping to outfit our test rigs with some of the finest hardware available.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.
We used the following test applications: MediaEspresso, MediaConverter, and our OpenCL-enabled build of Handbrake.
We measured total system power consumption at the wall socket using a P3 Kill A Watt digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.