The most logical place to begin our journey is with a look at memory subsystem performance. These results will quantify the speed of our system's memory before we dive into application and gaming tests to determine where the extra oomph matters.
Running DIMMs at a higher frequency boosts memory bandwidth—shocking, I know. Stream measures a nice increase in bandwidth as we step from 1333 to 1600 and then to 2133MHz. The rise in bandwidth between 1333 and 1600MHz is nearly linear, and our 2133MHz config doesn't lose too much ground on account of its looser timings. Jumping from 1333 to 2133MHz is good for more than a 50% increase in Stream memory bandwidth.
Memory frequency matters quite a bit more than latency in this test, as our two 1333MHz results make plainly clear. Tightening timings from 9-9-9-24 to 7-7-7-20 only increases memory bandwidth by a few percent.
In a specific test of memory access latency, tighter timings produce a more substantial gain. Frequency still reigns supreme, though. Our 1600MHz config is a few nanoseconds quicker than the best we managed at 1333MHz. As one might expect, access latencies are even faster when the DIMMs are cranked up to their top speed.
Before we dip into common desktop applications, let's drag out something from the always exciting field of scientific computing. We've always found the Euler3d computational fluid dynamics test to be particularly responsive to improvements in memory subsystem performance, but does that trend hold with Sandy Bridge?
In a word, yes. Memory frequency is still the biggest determining factor, but latency also plays a big role. Migrating from 1333 to 1600MHz with the same timings yields a half-point increase in the Euler3d score. The much bigger jump from 1600 to 2133MHz produces an increase of the same magnitude, suggesting that the 2133MHz config's looser timings are holding it back. We also see a nice little boost in performance from moving the 1333MHz setup to tighter timings.
Just ten milliseconds separate our four configs. The low-latency DDR3-1333 config fares the best here, while the 2133MHz config scores the worst. Those results suggest that tighter timings are more important than a higher frequency, but the scores are really too close to call.
Scores remain close in 7-Zip. We have nearly a dead heat in the decompression test, and the compression results show a slight preference for higher memory frequencies. We're not seeing anything close to the gaps observed in our memory subsystem tests, though.
The x264 video encoding benchmark doesn't do much with the extra bandwidth provided by our faster memory configs, but it does do a little. Raising the memory frequency and tightening timings both improve performance by small margins. However, splurging on fancy DIMMs isn't going to speed up your encoding times dramatically.
It's not going to do anything for file encryption performance, either—at least not with TrueCrypt.
Our Cinebench scores suggest that the Core i7-2600K isn't bound by memory speed when crunching single- or multithreaded rendering workloads.