The most logical place to begin our journey is with a look at memory subsystem performance. These results will quantify the speed of our system's memory before we dive into application and gaming tests to determine where the extra oomph matters.
Running DIMMs at a higher frequency boosts memory bandwidth—shocking, I know. Stream measures a nice increase in bandwidth going from 1333 to 1600 and 2133MHz. The rise in bandwidth between 1333 and 1600MHz is nearly linear, and our 2133MHz config doesn't lose too much ground on account of its looser timings. Jumping from 1333 to 2133MHz is good for more than a 50% increase in Stream memory bandwidth.
Memory frequency matters quite a bit more than latency in this test, as our two 1333MHz results make plainly clear. Tightening timings from 9-9-9-24 to 7-7-7-20 only increases memory bandwidth by a few percent.
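The scaling we measured roughly tracks theoretical peak bandwidth, which is set by the data rate alone. A quick sketch of the arithmetic, assuming a dual-channel DDR3 setup with 64-bit (8-byte) channels as on our Sandy Bridge test rig:

```python
def peak_bandwidth_gbs(data_rate_mts, channels=2, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s:
    transfers/sec * bytes per transfer * number of channels."""
    return data_rate_mts * 1e6 * bus_bytes * channels / 1e9

for rate in (1333, 1600, 2133):
    print(f"DDR3-{rate}: {peak_bandwidth_gbs(rate):.1f} GB/s peak")

# DDR3-2133 offers 60% more theoretical peak than DDR3-1333,
# which lines up with the >50% gain Stream actually measures.
ratio = peak_bandwidth_gbs(2133) / peak_bandwidth_gbs(1333)
```

Note that timings don't appear in this formula at all, which is consistent with the tiny bandwidth gain we saw from tightening them at 1333MHz.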
In a specific test of memory access latency, tighter timings produce a more substantial gain. Frequency still reigns supreme, though. Our 1600MHz config is a few nanoseconds quicker than the best we managed at 1333MHz. As one might expect, access latencies are even faster when the DIMMs are cranked up to their top speed.
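The interplay between frequency and timings falls out of how CAS latency is specified: in clock cycles, not nanoseconds, so the same CL number means less wall-clock time at a higher frequency. A simplified sketch (first-word CAS delay only; real access latency also includes row activation, precharge, and controller overhead):

```python
def cas_latency_ns(data_rate_mts, cas_cycles):
    """First-word CAS delay in nanoseconds. CAS cycles are counted at
    the memory clock, which is half the DDR data rate."""
    clock_mhz = data_rate_mts / 2
    return cas_cycles / clock_mhz * 1e3

print(cas_latency_ns(1333, 9))  # 9-9-9 timings at 1333MHz
print(cas_latency_ns(1333, 7))  # 7-7-7 timings at 1333MHz
print(cas_latency_ns(1600, 9))  # same 9-9-9 timings at 1600MHz
```

By this measure alone, DDR3-1333 at CL7 (10.5 ns) edges out DDR3-1600 at CL9 (11.25 ns), yet the 1600MHz config measured faster in practice, presumably because the other components of access latency also shrink with the faster clock.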
Before we dip into common desktop applications, let's drag out something from the always exciting field of scientific computing. We've always found the Euler3d computational fluid dynamics test to be particularly responsive to improvements in memory subsystem performance, but does that trend hold with Sandy Bridge?
In a word, yes. Memory frequency is still the biggest determining factor, but latency also plays a big role. Migrating from 1333 to 1600MHz with the same timings yields a half-point increase in the Euler3d score. The much bigger jump from 1600 to 2133MHz produces a performance increase of the same magnitude, suggesting that the 2133MHz config's looser timings are holding it back. We also see a nice little boost in performance when moving the 1333MHz setup to tighter timings.
Just ten milliseconds separate our four configs. The low-latency DDR3-1333 config fares the best here, while the 2133MHz config scores the worst. Those results suggest that tighter timings are more important than a higher frequency, but the scores are really too close to call.
Scores remain close in 7-Zip. The decompression test is nearly a dead heat, and the compression results show a slight edge for higher memory frequencies. We're not seeing anything close to the gaps observed in our memory subsystem tests, though.
The x264 video encoding benchmark doesn't do much with the extra bandwidth provided by our faster memory configs. Instead, it does a little. Raising the memory frequency and tightening timings both improve performance by small margins. However, splurging on fancy DIMMs isn't going to speed up your encoding times dramatically.
It's not going to do anything for file encryption performance, either—at least not with TrueCrypt.
Our Cinebench scores suggest that the Core i7-2600K isn't bound by memory speed when crunching single- or multithreaded rendering workloads.