Exploring the impact of memory speed on Sandy Bridge performance

For a moment, pretend that Intel’s 6-series chipset bug doesn’t exist. Turn the clocks back a couple of weeks and bask in the afterglow that followed the launch of Intel’s Sandy Bridge CPUs. This long-anticipated architectural refresh brought improved performance, lower power consumption, and surprisingly competent integrated graphics to a swath of mid-range processors. Indeed, we were so impressed that we handed out Editor’s Choice awards for several Sandy Bridge models—something we haven’t done for several generations of new CPUs.

Never mind that a single flawed transistor can sink the long-term 3Gbps Serial ATA performance of the associated 6-series chipsets; Sandy Bridge remains the bomb. The 6-series chipset bug is just that—a problem with the chipset that only reflects poorly on the processor because both must be present for a system to run. Motherboards based on a new chipset stepping are due in a couple of months, and all indications suggest that Intel’s latest CPUs will be just as attractive then as they were a couple of weeks back.

Enthusiasts contemplating a Sandy Bridge build are best off with one of Intel’s K-series CPUs: either the Core i5-2500K or the Core i7-2600K. The former offers four cores with a 3.3GHz base clock speed and a 3.7GHz Turbo peak, while the latter kicks those clocks up by 100MHz and throws in Hyper-Threading for good measure. By far the most important feature of these K-series models is a set of unlocked multipliers that facilitates easy overclocking.

Standard Sandy Bridge CPUs can only increase their core multipliers by four ticks above the default, putting a hard cap on overclocking headroom. More traditional overclocking methods that rely on increasing the base clock speed without touching the multiplier haven’t worked terribly well with Sandy Bridge because most of the CPU’s components key off that base clock. That’s made the K-series parts a must-have for enthusiasts looking to squeeze as much love as possible from their Sandy Bridge rigs.
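The arithmetic behind that cap is simple enough to sketch. Assuming the 2600K's 34x default multiplier and the roughly 100MHz base clock (illustrative figures for a 3.4GHz part), the ceiling for standard chips works out like so:

```python
# Sketch of Sandy Bridge clocking: core speed = multiplier x base clock.
# The ~100MHz base clock feeds most on-chip domains, so it can barely be
# raised; overclocking headroom comes almost entirely from the multiplier.

BCLK_MHZ = 100  # nominal base clock

def core_clock_mhz(multiplier, bclk=BCLK_MHZ):
    return multiplier * bclk

default_mult = 34                                  # a 3.4GHz part like the i7-2600K
locked_ceiling = core_clock_mhz(default_mult + 4)  # standard parts: +4 bins max
print(locked_ceiling)                              # 3800 -> a hard 3.8GHz wall

# An unlocked K-series multiplier can simply keep climbing:
print(core_clock_mhz(45))                          # 4500
```

That hard 3.8GHz wall for locked parts is why the K-series chips are the obvious pick for overclockers.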

In addition to allowing core speeds to be tweaked with little effort, the K series’ unlocked multipliers also make it easy to take advantage of faster memory. Standard Sandy Bridge processors may default to a 1333MHz memory clock, but select DDR3 modules are capable of running at much higher speeds. In some cases, you won’t pay much of a premium. Name-brand DDR3-1600 kits start at around $45 for 4GB, which isn’t much more than the cost of equivalent DDR3-1333 sticks. For roughly twice that amount (and very close to what slower DDR3 memory cost only a year ago), you can get your hands on exotic modules rated for operation up to 2133MHz.

Curious to see whether fancy DIMMs are worth the premium, we’ve taken the time to explore Sandy Bridge performance with a range of different memory configurations. Read on to see how memory clock speeds and latencies impact Intel’s latest processor architecture.

Test notes and methods

If you haven’t done so already, I strongly suggest reading our initial coverage of Intel’s Sandy Bridge CPUs. That review puts the performance of Intel’s new hotness in context against a wide range of contemporary competitors, while this article will focus on the impact of memory speed on the Core i7-2600K. To explore that arena, we’re going to need some fancy DIMMs.

Kingston handed us a 4GB kit of its HyperX DDR3-2133 KHX2133C9AD3X2K2/4GX memory at CES earlier this year, so we popped it into a Sandy Bridge system and went to town. With low-profile heatsinks and a stately gray aesthetic, the HyperX modules look surprisingly understated for premium memory. Don’t let the reserved exterior fool you, though. Beneath those heatsinks lies an array of DDR3 memory chips rated for operation at frequencies up to 2133MHz. At that speed, you’re looking at timings of 9-11-9-27, which is a little looser than the 9-9-9-24 latencies typical of DDR3-1333 modules.

Frequency and latency combine to influence memory performance, so we’ve tested a number of different combinations. The first is a standard setup with DDR3-1333 at the 9-9-9-24 timings common among inexpensive desktop modules. To see how more aggressive latency settings change the picture, we’ve run another set of tests at 1333MHz but with tighter 7-7-7-20 timings.

We’ll also look at how frequency comes into play with a set of results for the memory running at 1600MHz with 9-9-9-24 timings and at 2133MHz with 9-11-9-27 timings. We couldn’t get the system stable at that top memory speed with the 9-9-9-24 timings used for two of the other configs, but looser latencies generally go hand-in-hand with high-frequency modules anyway. An aggressive 1T command rate proved stable with all the configurations, so we used it across the board.
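Because timings are counted in clock cycles, the absolute delay they describe shrinks as the clock speeds up. Here's a quick sketch of the conversion, using the standard DDR3 relationship where the timing clock runs at half the effective data rate:

```python
# Convert memory timings (counted in I/O clock cycles) into nanoseconds.
# The DDR3 timing clock runs at half the effective transfer rate, so one
# cycle at DDR3-1333 lasts 2/1333 microseconds = 1.5ns.

def cycle_ns(transfer_rate_mts):
    """Duration of one timing cycle, in nanoseconds, for a DDR data rate."""
    return 2000.0 / transfer_rate_mts

def cas_ns(transfer_rate_mts, cas_cycles):
    """Absolute CAS latency in nanoseconds."""
    return cycle_ns(transfer_rate_mts) * cas_cycles

configs = {
    "DDR3-1333 CL7": cas_ns(1333, 7),   # ~10.5 ns
    "DDR3-1333 CL9": cas_ns(1333, 9),   # ~13.5 ns
    "DDR3-1600 CL9": cas_ns(1600, 9),   # ~11.3 ns
    "DDR3-2133 CL9": cas_ns(2133, 9),   # ~8.4 ns
}
for name, ns in configs.items():
    print(f"{name}: {ns:.1f} ns")
```

Notably, the 2133MHz config's CAS 9 works out to a lower absolute latency than even the tightened 1333MHz setup; the bigger numbers only look worse on paper.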

With few exceptions, all tests were run at least three times, and we reported the median of the scores produced.

Processor: Intel Core i7-2600K 3.4GHz
Motherboard: Asus P8P67 PRO
BIOS revision: 1204
Platform hub: Intel P67 Express
Chipset drivers: Chipset 9.2.0.1019, RST 10.1
Memory size: 4GB (2 DIMMs)
Memory type: Kingston HyperX KHX2133C9AD3X2K2/4GX
Memory speeds: 1333MHz, 1333MHz, 1600MHz, 2133MHz
Memory timings: 7-7-7-20-1T, 9-9-9-24-1T, 9-9-9-24-1T, 9-11-9-27-1T
Audio: Realtek ALC892 with 2.55 drivers
Graphics: Asus EAH5870 1GB with Catalyst 11.1 drivers
Hard drive: Western Digital Raptor WD1500ADFD 150GB
Power supply: PC Power & Cooling Silencer 750W
OS: Microsoft Windows 7 Ultimate x64

We’d like to thank Asus, Intel, PC Power & Cooling, and Western Digital for helping to outfit our test rigs with some of the finest hardware available.

We used the following versions of our test applications:

The test systems’ Windows desktop was set at 1280×1024 in 32-bit color at a 60Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory performance

The most logical place to begin our journey is with a look at memory subsystem performance. These results will quantify the speed of our system’s memory before we dive into application and gaming tests to determine where the extra oomph matters.

Running DIMMs at a higher frequency boosts memory bandwidth—shocking, I know. Stream measures a nice increase in bandwidth going from 1333 to 1600 and 2133MHz. The rise in bandwidth between 1333 and 1600MHz is nearly linear, and our 2133MHz config doesn’t lose too much ground on account of its looser timings. Jumping from 1333 to 2133MHz is good for more than a 50% increase in Stream memory bandwidth.
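That jump tracks the theoretical ceiling. Each 64-bit DDR3 channel moves eight bytes per transfer, so peak bandwidth scales linearly with the data rate; here's a quick sketch for our dual-channel configs (Stream achieves only a fraction of these peaks in practice):

```python
# Theoretical peak bandwidth for dual-channel DDR3:
# transfers/sec x 8 bytes per 64-bit channel x number of channels.

def peak_gbps(transfer_rate_mts, channels=2, bytes_per_transfer=8):
    """Peak bandwidth in GB/s (decimal) for a given DDR data rate."""
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

print(f"DDR3-1333: {peak_gbps(1333):.1f} GB/s")           # ~21.3
print(f"DDR3-2133: {peak_gbps(2133):.1f} GB/s")           # ~34.1
print(f"ratio: {peak_gbps(2133) / peak_gbps(1333):.2f}")  # ~1.60
```

With roughly 60% more theoretical bandwidth on tap at 2133MHz, a better-than-50% measured gain in Stream is about what you'd hope to see.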

Memory frequency matters quite a bit more than latency in this test, as our two 1333MHz results make plainly clear. Tightening timings from 9-9-9-24 to 7-7-7-20 only increases memory bandwidth by a few percent.

In a specific test of memory access latency, tighter timings produce a more substantial gain. Frequency still reigns supreme, though. Our 1600MHz config is a few nanoseconds quicker than the best we managed at 1333MHz. As one might expect, access latencies drop even further when the DIMMs are cranked up to their top speed.

Application performance

Before we dip into common desktop applications, let’s drag out something from the always exciting field of scientific computing. We’ve always found the Euler3d computational fluid dynamics test to be particularly responsive to improvements in memory subsystem performance, but does that trend hold with Sandy Bridge?

In a word, yes. Memory frequency is still the biggest determining factor, but latency also plays a big role. Migrating from 1333 to 1600MHz with the same timings yields a half-point increase in the Euler3d score. The much bigger jump from 1600 to 2133MHz produces a performance increase of similar magnitude, suggesting that the 2133MHz config’s looser timings are holding it back. We also see a nice little boost from moving the 1333MHz setup to tighter timings.

Now, onto some more common desktop applications, starting with the SunSpider JavaScript browser benchmark.

Just ten milliseconds separate our four configs. The low-latency DDR3-1333 config fares the best here, while the 2133MHz setup scores the worst. Those results suggest that tighter timings are more important than a higher frequency, but the scores are really too close to call.

Scores remain close in 7-Zip. We have nearly a dead heat in the decompression test, and the compression results show some favor for higher memory frequencies. We’re not seeing anything close to the gaps observed in our memory subsystem tests, though.

The x264 video encoding benchmark doesn’t do much with the extra bandwidth provided by our faster memory configs. Instead, it does a little. Raising the memory frequency and tightening timings both improve performance by small margins. However, splurging on fancy DIMMs isn’t going to speed up your encoding times dramatically.

It’s not going to do anything for file encryption performance, either—at least not with TrueCrypt.

Our Cinebench scores suggest that the Core i7-2600K isn’t bound by memory speed when crunching single- or multithreaded rendering workloads.

Gaming

Games are arguably the most demanding applications that enthusiasts run on their PCs on a regular basis. To find out whether faster memory affects in-game frame rates, we collected a handful of titles and ran them through two sets of tests. The first batch was conducted at a modest resolution and with low in-game detail settings to remove the graphics card as a potential bottleneck. For the latter, we pushed the resolution to 1920×1080 and cranked the detail levels as high as we could while maintaining playable frame rates.

We tapped each game’s built-in benchmarking component to test its performance. All four titles were run in DirectX 11 mode, even when using low detail settings. For Civilization V, we used the full render score, which should be the most representative of real-world performance. That score has been converted to frames per second to make the graphs easier to understand.

At low resolutions and detail levels, we’re not seeing much of a case for faster memory. A higher memory frequency buys a few frames per second here and there, but that’s pretty much the extent of it. Our low-latency DDR3-1333 config doesn’t really separate itself from the pack, either.

With the exception of competitive Counter-Strike players trying to purge any potential for performance hiccups—real or imagined—most folks use the highest resolution and detail levels they can when playing games. That tends to make one’s graphics card the bottleneck, which is why we see even less separation with this round of tests. At best, the difference between our fastest and slowest memory configs amounts to a few FPS.

Conclusions

Even before we consider the results of our performance testing, it’s interesting to note that Sandy Bridge makes higher-speed memory both more and less appealing. On one hand, the unlocked memory multiplier present in K-series CPUs makes setting a higher memory frequency almost as trivial as changing any other BIOS setting. On the other, the fact that base-clock overclocking is essentially a dead end for Sandy Bridge CPUs means that faster memory isn’t required to keep up with higher base clock frequencies. The only reason to buy faster memory for a Sandy Bridge rig is if it’s going to improve performance.

So, is it?

That depends. If you’re running memory benchmarks all day long, then yes, faster memory will improve bandwidth and access latencies substantially. In fact, Sandy Bridge CPUs extract more performance from the same memory configuration than their Lynnfield- and Clarkdale-based counterparts. However, as we learned when exploring the effect of memory speed on the performance of Intel’s first Core i7 processors, finding games and applications that make effective use of the extra memory bandwidth and faster access latencies can be difficult.

Among the tests we ran, only the Euler3d fluid dynamics simulation enjoyed a substantial benefit from faster memory configurations. Video encoding and file compression ran a little bit quicker with higher memory frequencies and tighter timings, but most of our application tests showed little or no improvement in performance. Neither did the games, which only managed to squeeze a few extra FPS out of our fastest memory configuration.

Although there are certainly cases where pairing Sandy Bridge processors with low-latency or high-frequency memory can yield impressive gains, it’s hard to find a common desktop application or game whose performance improves enough to justify the additional expense. If you’re looking to set benchmarking records or to compensate for personal shortcomings, K-series Sandy Bridge CPUs at least make it easy to run exotic DIMMs at blistering speeds. Everyone else can rest assured that using relatively inexpensive DDR3-1333 memory won’t cost them much performance in the real world.

Comments closed
    • DarkUltra
    • 9 years ago

    How about testing 8 and 16GB too, how fast can they run on a Sandy Bridge? I like having a lot of my hd cached. Faster alt-tab and video memory dump.

    • Etienne145
    • 9 years ago

    I’ve got a 2500K / P8P67 Pro setup that I’m trying to squeeze a little more out of the ram with. I can get it stable at 1600 with 8-8-8-24-2T or 9-9-9-24-1T @1.6v. I realize that in real applications there won’t likely be much actual performance difference, but I’m curious which settings would be considered better…

    Anyone have an opinion?

    • evilpaul
    • 9 years ago

    How does it affect power consumption?

    • ltwizard
    • 9 years ago

    No test at 800MHz? Or is this not possible on Sandy Bridge hardware?

      • Stargazer
      • 9 years ago

      Hmm. That would have been interesting.
      If you underclock the memory until it becomes a bottleneck, you get an idea of how much headroom you have.

    • oldDummy
    • 9 years ago

    Good article.
    It never hurts to be reminded or to inform the newly hardware aware.
    Everyday standard low cost memory is fine.

    Memory hype:

    Same as it everwas, same as it everwas, same as it everwas, same as it everwas, same as it everwas…..

    • Damage
    • 9 years ago

    Trymor has been banned for obvious reasons. Don’t try that at home, kids, or you’ll get your very own ban, too.

      • Meadows
      • 9 years ago

      Good god, finally. Thank you.

      • NeelyCam
      • 9 years ago

      [quote<]Don't try that at home, kids, or you'll get your very own ban, too.[/quote<] I wonder if that was directed at me...

        • Meadows
        • 9 years ago

        Don’t tell me you can’t [u<]see[/u<] what Trymor was doing.

          • NeelyCam
          • 9 years ago

          Sure I can, but I’m mostly concerned about bans being handed out in general – this is the first ban I’ve seen in my Techreport freshman year.

          I wonder if there is a comment rule book somewhere..

      • albundy
      • 9 years ago

      you mean that he was completely obnoxious, or that he replied to his multiple personality self several times? lol

      …and kids, you dont even need your parents permission.

    • Meadows
    • 9 years ago

    Damage, seriously, moderate this Trymor person because he’s inflating page and thread sizes for no good reason. While you’re at it, show him the “Edit” button.

    Thank you.

    (PS: introduce a “close thread” button that is remembered per account.)

    • crazybus
    • 9 years ago

    I’d be interested in seeing the effect of faster memory on Sandy Bridge IGP performance, for those niche SFF applications where a graphics card isn’t practical but you still want to extract maximum performance from the system.

      • willmore
      • 9 years ago

      I’d imagine it would be between zero and a very small number. Really, if you need graphical performance, add in a video card. Even a G210 would be an improvement, no?

        • crazybus
        • 9 years ago

        I’m thinking small form factor like those mini-ITX cases that don’t have room for a PCIe card, or where your choice of PSU limits discrete graphics card choices. I don’t know why you think the performance difference would be negligible, if anything can benefit from faster memory it would be the IGP.

    • Trymor
    • 9 years ago

    “2x 1333mhz for the quad and dual core
    3x 1600mhz for the hexa and octo core”

    Might be better not to comment, but just ‘thumb rate’ this as a decent rule of thumb? If I’m wrong, vote me down!

    • Stargazer
    • 9 years ago

    I’d been hoping you’d do an article like this. Thanks for doing it!

    I would have really liked to see some tests under heavy multitasking scenarios though.

    It would also have been nice to have seen 1600 C8 memory in there, since that’s pretty much the smallest step up from 1600 C9 (which I’m pretty much considering the baseline for what I’m planning to buy), but I would expect that the differences between C9 and C8 would have been minimal (to say the least) in these tests anyway.

    It will be interesting to see how usage of AVX will affect this. It should be able to put a higher load on the memory sub system.

    On a somewhat related note, what’s the impact of higher memory speeds when using Quick Sync?

      • willmore
      • 9 years ago

      I agree with your logic on this. I’m not sure if the author quite understands the relations amongst timings, clock speed, latency, and bandwidth.

      Here’s an example:
      At 1333MHz, one cycle is 750 picoseconds.
      At 1600MHz, one cycle is 625 picoseconds.
      At 2133MHz, one cycle is 469 picoseconds.

      So,
      7 cycles at 1333MHz is 5250 ps
      9 cycles at 1333MHz is 6750 ps
      9 cycles at 1600MHz is 5625 ps
      9 cycles at 2133MHz is 4219 ps
      11 cycles at 2133MHz is 5157 ps

      So, you can see that the latency of the 9-11-9-27 by 2133MHz config is lower than even the 7-7-7-20 by 1333MHz config. The phrase “our 2133MHz config doesn’t lose too much ground on account of its looser timings.” is confusing as the 2133MHz config has *tighter* timings when measured in absolute units–and not clock relative cycle values.

      If we had data for 9-11-9-27 for all clock speeds, we’d be able to normalize out the BW effects on these tests and be able to characterize them on their correlation to latency and BW. Heck a whole ‘half triangle’ of timings would be nice. Hey, you guys at Damage Labs, please spend the rest of your lives running tests for us! 😉

      • DarkUltra
      • 9 years ago

      Multitasking! When my computer do a backup run in the eh background while I play a game, would faster memory help?

    • Bensam123
    • 9 years ago

    I made a comment on one of the other reviews about the increasing amount of memory that you can put into computers now. P55 maxes out at 16GB, x58 – 24GB, SB is a whopping 32GB.

    Games rarely use more then 2GB of memory, the OS and all my applications rarely exceeds 3GB in itself, so what do you use with all that extra memory if you actually choose to max it out?

    I really think TR should take a look at Ram Disks. You guys seem to be on top of the whole SSD craze, but I haven’t seen a SINGLE hardware review website with a take on faster storage such as this. 32 GB, taking out 4GB for operating programs, is still a 28GB almost instant access hard drive. 8GB modules are only going to fall in price and 4GB modules are extremely cheap right now.

    Putting this review into perspective, if this review was done after a review about the feasibility now of ram disks, it could’ve also taken a look at performance increases for a ram disks with faster dimms.

    • ste_mark
    • 9 years ago

    Just to show somebody reads the tables, which is correct?
    [quote<]Memory timings 7-7-7-20-1T 9-9-9-24-1T 9-9-9-24-1T [b<]9-11-9-24-1T[/b<] [/quote<] [quote<]We'll also look at how frequency comes into play with a set of results for the memory running at 1600MHz with 9-9-9-24 timings and at 2133MHz with [b<]9-11-9-27[/b<] timings.[/quote<]

    • KoolAidMan
    • 9 years ago

    Once again, fast memory doesn’t make any practical difference in games. I can’t remember a time when this has ever made a real difference, much better to spend the cash on a faster GPU or CPU.

      • AssBall
      • 9 years ago

      I think the last time it mattered was on the Windsor Athlon 64’s. I think it was the first time a mainstream desktop cpu had the memory controller on die. Those things were faster with overclocked low latency RAM. Of course that was when video cards and RAM were a lot less powerful than they are today.

        • NeelyCam
        • 9 years ago

        I had to thumb you up because of your name.

      • Smeghead
      • 9 years ago

      Or, if you must spend the cash on memory, spend it on more, rather than on the quick stuff.

      2GB sticks of vanilla DDR3 are dirt cheap, and 4GB sticks are quite affordable. If I were buying a machine right now, I’d probably just fill it with 16GB (in the case of 2-channel) and call it done. Doing that eats up less than $200, and with that much you’d likely never have to worry about memory again during the system’s lifetime.

      Hell, I pretty much did that when I built my current machine (a core 2 quad) – 2GB sticks were the sweet spot, so I stuffed all 4 slots and walked away. I still haven’t come close to useing it all, so Windows is usually sitting pretty with a big fat disk cache.

    • swampfox
    • 9 years ago

    Seems like spending that extra money to upgrade the graphics card or CPU one more bin makes a lot more sense…

    • Anomymous Gerbil
    • 9 years ago

    It would be nice if TR performance charts were expressed in percentages instead of (or as well as) raw figures, maybe normalised to 100% for the fastest/slowest/base config. The base chart could show raw figures, which then change to percentages when you mouse-over.

    (Yes, this review shows mostly negligible changes between memory configs, but I’m talking in general.)

    • UberGerbil
    • 9 years ago

    Will we see this revisited when those quad-channel server systems show up? ,)

      • Trymor
      • 9 years ago

      Yup.

      • flip-mode
      • 9 years ago

      Do you think quad 1333 or quad 1600 is going to make more of a difference?

        • Meadows
        • 9 years ago

        I rather think it’s the server workloads and the presence of more CPU power (or 2 or even 4 physical processors) that might sooner reveal whether the attached RAM has any such inadequacy.

      • ImSpartacus
      • 9 years ago

      Will anybody else steal mah one-eyed pirate smileys?

      .)

        • stdRaichu
        • 9 years ago

        It just looks like sideboob.

          • Trymor
          • 9 years ago

          (\ /) <————————— ?
          \ /
          / \
          | | |
          \ | /
          | | |
          _| | |_

            • Trymor
            • 9 years ago

            Wow, guess I suck at ascii… lol.

    • djgandy
    • 9 years ago

    A correction for this paragraph…
    “Just ten milliseconds separate our four configs. The low-latency DDR3-1333 config fares the best here, while the 2133MHz scores the worst. Those results suggest that latency is a more important factor than frequency, but the scores are really too close to call.”

    2133MHz has the lowest latencies too when expressed in time. So with lower latency higher bandwidth it came lower.

      • Dissonance
      • 9 years ago

      Clarified. I was referring to latency timings rather than actual access latencies.

    • NeelyCam
    • 9 years ago

    It would’ve been interesting to see the gaming benchmarks on the IGP with different memory speeds/latencies… of course, it would require lower resolutions/settings, but that’s fine.

    I would think the memory speed is more of a bottleneck for the IGP than for a discrete card with dedicated GDDR memory

      • Buzzard44
      • 9 years ago

      I just want to see Sandy Bridge IGP benchmarks period. Maybe I missed them in the initial review, but I didn’t see them. I thought that was a big deal with Sandy Bridge, having a GPU on the die, but nobody seems to cover it very well.

        • Trymor
        • 9 years ago

        [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/39555-intel-sandy-bridge-core-i5-2500k-core-i7-2600k-processors-review-19.html[/url<] Didn't read it, but if that doesn't work: [url<]http://lmgtfy.com/?q=Sandy+Bridge+IGP+benchmarks[/url<] might give you some answers.

        • UberGerbil
        • 9 years ago

        [url=http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500k-core-i3-2100-tested/11<]http://www.anandtech.com/show/4083/the-sandy-bridge-review-intel-core-i7-2600k-i5-2500k-core-i3-2100-tested/11[/url<] It'll be a much bigger deal for mobile products where it may be your only choice. For desktops, either the previous IGP was good enough for what you were using it for (web/email/office) or you bought a discrete GPU. That's still the case with Sandy Bridge. Either the IGP is "fast enough" and therefore doesn't matter, or it's not fast enough and therefore doesn't matter. The SB is indeed faster, but sufficient is sufficient in the first case, and still not in the second. The one new/interesting thing the SB IGP brings to the table is the much-improved video decoding circuitry (for the formats it supports). This makes a low-end SB a much more interesting choice for a cool'n'quiet HTPC (though the new [url=http://www.anandtech.com/show/4134/the-brazos-review-amds-e350-supplants-ion-for-miniitx<]ITX Brazos boards[/url<] give it a great run for the money)

          • Trymor
          • 9 years ago

          Did you forget/miss that SB IGP doesn’t support 23.976 fps? Or is there something else I missed?

            • UberGerbil
            • 9 years ago

            Forgot / missed, but mostly didn’t care. I agree that will be an important consideration for some HTPC buyers, however.

            • Trymor
            • 9 years ago

            “Forgot / missed, BUT I MOSTLY DID NOT CARE. I agree that will be an important consideration for some HTPC buyers, however.”

            lol. human ego.

            • Flying Fox
            • 9 years ago

            Linky?

            • Trymor
            • 9 years ago

            Don’t remember the link, but it has been discussed B4 in earlier article comments.

            • ImSpartacus
            • 9 years ago

            That only matters to hardcore HTPC users. AT has made an enormous stink about it, but only a select few users will even care. And besides, if you care THAT much, you already have the budget to buy a discrete card.

            • Trymor
            • 9 years ago

            “has made an enormous stink about it, but only a select few users will even care.” – Isn’t that what happens in a lot of the discussions?

            • Trymor
            • 9 years ago

            Movies sent to (my TVs and PC’s at least) play smoothest at 23.976 fps. Movies trans-coded to other rates judder and stumble.. I don’t believe I can easily convey this with words, but the lack of that mode was a large over-site by Intel. Like lack of ‘proper’ resolutions from Intel’s graphics, missing features, blah, blah.

            The ‘average consumer’ may not notice timing imperfections, but a lot of people who know how to use a PC as a HTPC very well could.

            I would have to see for myself, of course, but I would not buy SB specifically for an HTPC until I checked out avforums, or something that investigates that part of the chip.

            • Trymor
            • 9 years ago

            Hate got me the -1 score. Uber even responded and admitted to it. Keep ’em coming…

            (it being his over-site)

          • insulin_junkie72
          • 9 years ago

          [quote<]This makes a low-end SB a much more interesting choice for a cool'n'quiet HTPC[/quote<] QuickSync is shockingly good, having finally started to play a round a little bit with it using the demo of MediaConverter. Part of the problem trying to test it so far, though, is that the programs that have implemented it so far are geared towards less technical users, so not a ton of custom settings. MainConcept has a higher-end encoder that lets you fiddle with more of the settings, so it appears the Intel SDK isn't that limiting, so hopefully as time passes, the freeware/sanely priced apps follow suit. Still, for shrinking down OTA recordings or something, it was surprisingly good, didn't use much power, and quite fast. Considering the 2000 IGP doesn't seem to lose *too* much to the 3000 IGP in QuickSync performance, even an i3 in a small case would work out fairly well.

            • Trymor
            • 9 years ago

            Trans-coding in-chip is the big deal right? Does output strait to the TV from a 3rd-party video player use anything special on SB?

      • mczak
      • 9 years ago

      I’d second that – except that unless I remember that wrong, you can’t overclock the memory on H67 neither… I would definitely expect it to make a difference.

    • JAMF
    • 9 years ago

    I would be very interested how much difference these would have in an overclocked system.

    • PrincipalSkinner
    • 9 years ago

    What CPU cooler was used? Looks like a monster on that pic.

    • bcronce
    • 9 years ago

    Multi core CPUs with large amounts of cache and advanced pre-fetching can easily hide any slow down caused by memory access.

    Why pay 2x’s the amount for memory that runs much hotter and consumes more power? I would rather invent into a beast PSU, get some decent memory and OC the cpu. Memory speeds just don’t help much any more.

      • Trymor
      • 9 years ago

      and why would any of that require a ‘beast’ PSU?

        • bcronce
        • 9 years ago

        Because most high end PSU’s are a good investment that may be good for several generations of CPU/Motherboards along with better efficiencies.

        If instead of paying $400 for high end memory, I could pay $200 for the memory and get a $200 PSU with 93% efficiency(Seasonic x850) that will probably last a good 6-8 years. I doubt you’d get that type of return from memory that gives a 0%-3% performance increase and eats up more power and runs hotter.

          • NeelyCam
          • 9 years ago

          You didn’t answer his question.

            • flip-mode
            • 9 years ago

            I’d say the question was not optimally phrased, specifically the use of the word “require”, which is what I believe you are alluding to. The question would have been more contextually appropriate, IMO, if it had been phrased like so: “and why would a ‘beast’ PSU be more useful than ‘beast’ RAM?”

            So, I think bcronce answered the question that should have been asked.

            If we want to get into questions of what is ‘required’, we’ll be headed for nudist colonies and stone-age technology. Questions of usefulness are, here, more suitable than questions of necessity.

            • Trymor
            • 9 years ago

            …but he still didn’t answer my question.

            • bcronce
            • 9 years ago

            Your question had nothing to do with my post. I responded with what you question should have been.

            Like the above person said, I posted nothing about requirements. So your question would’ve changed into “Why would you rather have a beast PSU?”

            • Trymor
            • 9 years ago

            “I responded with what you question should have been.” – OH THE EGO OF IT!

            “I would rather invent into a beast PSU” – are these not your words?

            I had a question, and I asked it. Do you always put words into ppls mouths? Waa. Deal with it.

            • flip-mode
            • 9 years ago

            He responded to it. Perhaps his response was generous, given the possibility that the question was kinda crummy to begin with.

            • Trymor
            • 9 years ago

            The only dumb question is the question that is not asked. Your ‘good buddy’ perhaps?

            • Trymor
            • 9 years ago

            It was a simple question, and since I got +3 I assume more ppl than not understood it…

            • Trymor
            • 9 years ago

            “He responded to it.” – BUT HE STILL DID NOT ANSWER IT.

            • Trymor
            • 9 years ago

            Define Crummy. Care to keep going?

            • Trymor
            • 9 years ago

            “we’ll be headed for nudist colonies and stone-age technology.” – sounds good to me. We are pretty messed up as a race.

          • Trymor
          • 9 years ago

          Seems to me, that the MB/CPU/Vidcards, SSDs replacing spinning drives, etc… are all getting more power efficient, so over-buying size-wise doesn’t seem like the best place to put the money, at least to my logic.

          • Trymor
          • 9 years ago

          Oh yeah, and how do you know we will be using the same power connectors in 6-8 years?

            • bcronce
            • 9 years ago

            Every PSU since the beginning of time has supplied 3.3/5/12V. All you need are adapters. Some of the old PSUs didn’t supply enough 12V power, but all high-end current PSUs can put out the same power on 3.3V, 5V, or 12V. Power capacity is no longer an issue; it’s just the standardized plugs that change.

            • Trymor
            • 9 years ago

            Like I said, how do you know the connectors will be the same in 6-8 years? Not to mention requirements? <—that was stupid, That is smart—->The past is not a good indicator of the future these days…

            And with power efficiency and ‘green’ movements happening now, how do you know you will need all that power in 6-8 years? Not to mention, by then we may be…etc.

            Waa. Deal with it.

            • Bensam123
            • 9 years ago

            How do you know the world won’t end tomorrow and power supplies won’t matter?!?!?

            I would reply with an actual answer, but you aren’t looking for one. All your questions are rhetorical and never meant to be answered because you don’t care what’s posted after them.

            • Trymor
            • 9 years ago

            “How do you know the world won’t end tomorrow and power supplies won’t matter?!?!?” – That’s my point. Lame excuse for buying a ‘beast’ PSU. Is there any logical explanation for why he would want/need/require a beast PSU?

            I WANNA KNOW. I have never bought more than a 400-watt power supply, mostly cuz I’m cheap. I am running a Q6600, an Nvidia GTX 260 (I believe that’s correct), 5 spinning HDs, etc.

            Your post has nothing to do with anything.

          • Bensam123
          • 9 years ago

          How is what he said not an answer?

          An investment is an answer. What a bunch of inconsiderate douchebags. A better question is “Why ask a question if you don’t want an answer?”

            • Trymor
            • 9 years ago

            “Why ask a question if you don’t want an answer?” – That’s the point. I wanted an answer.

            • Trymor
            • 9 years ago

            “What a bunch of inconsiderate douchebags.” – Calling ppl names and judging them is better? Get off your high-horse 😉

            • Trymor
            • 9 years ago

            “and why would any of that require a ‘beast’ PSU?” – notice the word REQUIRE? None of the answers posted prior answer that question.

            • Bensam123
            • 9 years ago

            Wow, just wow… you’re like an ultra spaz and a douchebag combined. You glanced over the part where he ANSWERED your question and where I reiterated his answer, and only replied to cherry-picked portions of my post taken out of context.

            If you don’t understand the premise of someone’s post, don’t quote it, let alone reply to it. My question was rhetorical, aimed at you asking a question but never looking for the answer even after he gave one. I said you didn’t care because, well, you still don’t and you apparently never did.

            Someone should check out this guy’s IPs too. It’s curious how I get rated down to -2 and all his posts ‘magically’ gain +1. Something tells me someone is using more than one account to attempt to make themselves look good, for what little good that does.

            • Trymor
            • 9 years ago

            Aww, did I hurt your feelings Mr. name-caller? – Is that all it takes to make you feel better about yourself?

            “You glanced over the part where he ANSWERED your question and where I reiterated his answer and only replied to cherry picked portions of my post taken out of context.” – Nope. I have read every comment multiple times.

            Can you tell me why (or why not) a beast power supply would be required?

            “If you don’t understand the premise of someones post, don’t quote it let alone reply to it.” – Don’t tell me what to do 😉 I am a free human being.

            • Trymor
            • 9 years ago

            Oh yeah, my I.P. hasn’t changed for anything. Looks like you are looking for a scapegoat perhaps?

            • Trymor
            • 9 years ago

            And BTW, I am not good, and if you saw me in person, you would know I don’t ‘look good’. You?

            • Trymor
            • 9 years ago

            “It’s curious how I get rated down to -2 and all his posts ‘magically’ gain +1. Something tells me someone is using more then one account to attempt to make themselves look good, for what little good that does.” – Paranoid, or just can’t take rejection?

      • Trymor
      • 9 years ago

      “I did just that. Purchased some 1600 memory for my 1066 i7-920 because Ivy Bridge will use 1600.”

      “2x 1333mhz for the quad and dual core
      3x 1600mhz for the hexa and octo core”

      “There’s another thread here with some benchmarks of a 2600K running 5Ghz – it is using DDR3-1333 at 1.5V.” – [url<]http://hardforum.com/showthread.php?t=1561738[/url<]

      None of that is mine, but you might be interested.

      • UberGerbil
      • 9 years ago

      How do I hide this entire thread?

        • Trymor
        • 9 years ago

        By kicking me off the interwebs 😉

        • Trymor
        • 9 years ago

        Whatsthematter?

          • flip-mode
          • 9 years ago

          You need to tryless.

            • Meadows
            • 9 years ago

            Flawless.

            • Trymor
            • 9 years ago

            I agree. Perfect… but I’ve heard that B4, and I never learn… heh.

            • Trymor
            • 9 years ago

            I would if he would have just answered a simple question. Apparently he couldn’t because he realized he was wrong?

        • Bensam123
        • 9 years ago

        This is why petty people are bad. They lose the initial argument and just try to pick apart tiny little pieces they think will somehow make the person’s overall argument invalid. Sadly, a lot of people get sucked into this; look further up.

        Putting all of this into context Bcs original argument was just that it was a bad choice spending extra money on more memory; rather it should be spent on other things which are more helpful in the long run.

        And I agree with that.

          • Trymor
          • 9 years ago

          What did I lose? I asked a simple fucking question, and everyone got bent out of shape :/

          • Trymor
          • 9 years ago

          “Putting all of this into context Bcs original argument was just that it was a bad choice spending extra money on more memory; rather it should be spent on other things which are more helpful in the long run.

          And I agree with that.” – Me too. But I wanted to know why a ‘beast’ power supply would be needed. Optimum efficiency, what?

          • Trymor
          • 9 years ago

          This is why JUDGMENTAL people are bad.

          • Trymor
          • 9 years ago

          “spending extra money on more memory” – Wrong again. (aren’t you getting sick of being wrong yet?) – it was about spending extra money on FASTER memory.

      • Trymor
      • 9 years ago

      Looks like I pissed a few ppl off…lol.

        • Bensam123
        • 9 years ago

        lawl, making people angry is funny… lolroflbbqcopter

        Kids grow up some day, sadly a lot of adults never do.

          • Trymor
          • 9 years ago

          Thank humans! Growing up is boring as hell…

          Besides, that wasn’t my goal. I didn’t know enough about PSU efficiency. All I wanted was an answer to my question, then everyone started putting words in my mouth.

          Then there was this: “Seems to me, that the MB/CPU/Vidcards, SSDs replacing spinning drives, etc… are all getting more power efficient, so over-buying size-wise doesn’t seem like the best place to put the money, at least to my logic.”

          And nobody answered that. That is in line with “why does any of that require a beast of a PSU?”

          Nice implications there without specifically saying what you mean. Grow some balls.

          • Trymor
          • 9 years ago

          Calling ppl names and making fun of them is grown up?

    • merkill
    • 9 years ago

    The best reason to use faster mem is if you plan on using it in another build later or upgrading;
    after all, Ivy Bridge will use DDR3-1600 and Bulldozer will use faster mem than that.

    So I think the extra cash on DDR3-1600 is worth it for that reason alone, but for those not planning on upgrading, DDR3-1333 is perfect.

      • bcronce
      • 9 years ago

      I did just that. Purchased some 1600 memory for my 1066 i7-920 because Ivy Bridge will use 1600.

        • Vaughn
        • 9 years ago

        You are going to run into a problem, though. Sandy Bridge currently wants 1.5V DDR3 memory, and a lot of people say you have to be careful with 1.65V memory on those systems. Ivy Bridge might have the same requirement or lower. You may not be able to reuse that memory.

        I’m on a 920 D0 system as well with Mushkin memory @ 1.65V, so I’m keeping an eye on it.

          • ImSpartacus
          • 9 years ago

          How could Ivy Bridge require <1.5v ram? Isn’t that outside the DDR3 spec?

          Newegg has 15 1.35v kits.

          Hell, one nice 1600MHz 2x2GB OCZ kit is going for $34.99 AR/shipped.

          [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16820227653[/url<]

          Only one rebate per address, so if you got three kits (6x2GB) to fit a triple-channel system, it would be 45+45+35. $125 for 12GB of future-proofish memory might be worth it. You would probably have to run at 1333MHz, but it would still be ok.

            • NeelyCam
            • 9 years ago

            [quote<]How could Ivy Bridge require <1.5v ram? Isn't that outside the DDR3 spec?[/quote<]

            Yes. Since Ivy Bridge supports DDR3, it has to handle 1.5V + 10% = 1.65V.

            That said, bcronce has an i7-920 based system, and those memory controllers support DDR3 overvolting for higher performance... if his memory is 1.8-2.1V, he's SOL - he'd have to run it at lower voltages/settings to have it working with SB/IB.

    • anotherengineer
    • 9 years ago

    “Everyone else can rest assured that using relatively inexpensive DDR3-1333 memory won’t cost them much performance in the real world”

    I would call 2% and less zero anyway, so DDR3-1333 won’t cost any real-world performance difference (or at least a negligible one).

    I heard Bulldozer is supposed to support DDR3-1866??? That true?

      • stdRaichu
      • 9 years ago

      IIRC, high-speed memory has been making a negligible difference to overall speed ever since memory controllers moved onto the CPU; Anand did a review comparing a bunch of uber-RAM on newly released A64 939 and 754 systems and found little to no difference between them apart from in synthetic tests.

      [url<]http://www.anandtech.com/show/1494[/url<]

      I've been using bottom-end Crucial or Corsair stuff since the C2D came out (which had a boatload of cache and memory readahead to make up for the fact the controller wasn't onboard yet).

        • willmore
        • 9 years ago

        My main benchmark is Prime95. It’s also my main application. It loves memory bandwidth. I’d love to see some testing with different memory speeds using it. On my OCed Q6600 (400×8), I see a big difference between running the memory (DDR2) at 800 and 1066.

        I’ve got two sticks of 1600C9 DDR3 sitting here waiting for fixed Sandy Bridge motherboards to get made. I’m glad to see that it’s going to be sufficiently fast to keep a 2500K fed for most apps.

        • swaaye
        • 9 years ago

        The RAM latency has been steadily increasing with each memory bandwidth bump, and since CPUs are quite latency-sensitive, this reduces the benefit too.

        It’s the same with video card GDDR5, but graphics isn’t as sensitive to latency. Apparently it does impact GPGPU, though. The Cayman article over at RealWorldTech talks a little about that.
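        swaaye’s point is easy to check with a quick calculation: CAS latency in clock cycles has climbed with each generation, but absolute latency in nanoseconds has stayed roughly flat while bandwidth doubled. The module speeds and CAS ratings below are typical retail examples, not figures from the article:

```python
# Checking the latency-vs-bandwidth trend: CAS cycles climb each generation,
# but absolute CAS latency in nanoseconds stays roughly flat.
# The modules below are typical retail parts chosen for illustration.

def cas_ns(data_rate_mt_s: float, cas_cycles: int) -> float:
    """CAS latency in ns: cycles times the clock period (clock = data rate / 2)."""
    clock_mhz = data_rate_mt_s / 2
    return cas_cycles * 1000 / clock_mhz

modules = [
    ("DDR2-800 CL5",  800, 5),
    ("DDR3-1333 CL9", 1333, 9),
    ("DDR3-1600 CL9", 1600, 9),
]

for name, rate, cl in modules:
    print(f"{name:14} {cas_ns(rate, cl):5.2f} ns")
```

        All three land within a couple of nanoseconds of each other, which is why the bandwidth doubling doesn’t translate into a matching real-world speedup.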

    • Crayon Shin Chan
    • 9 years ago

    Not really surprising with recent CPUs. Skipped straight to the conclusion, saved myself the time too.

    • flip-mode
    • 9 years ago

    Wow, you could plumb a line with those graphs. Thanks for the article Geoff. I’m still clinging to DDR2. If Bulldozer ends up competitive then I might contemplate moving the X4 955 to an AM3+ / DDR3 platform with the expectation of moving to Bulldozer later on. Otherwise, I suppose I’ll be sitting tight on DDR2 until I need a new CPU, at which point, DDR4 might be out! But if / when I do go DDR3, this article just reinforces what has been the case since RAM got faster than DDR2 800 – RAM speed increases beyond that aren’t anything to salivate over.

      • Trymor
      • 9 years ago

      Weren’t the first DDR3 modules slower than the equivalent-speed DDR2 chips, just like the move from 1 to 2?

      I buy cheap stock sticks for my rig until good stuff gets cheaper, then I double the RAM and overclock my CPU from 2.4 to 3GHz. Done it 3 times now, and I am still using a Q6600. I love the SB stuff, but I will probably be using RAMBUS by my next upgrade 😉

        • Flying Fox
        • 9 years ago

        That is because the latency was too high for the faster clock to compensate.

          • Trymor
          • 9 years ago

          Ah yes. Should we buy the same capacity for half the price and a 1-year warranty, or pay twice as much for ‘good’ RAM with a 2-year warranty (just for comparison’s sake)?

          Comes out even, but when the cheap stuff dies, you buy faster cheap stuff. When 2-5 year warranty stuff dies, you are stuck with the ‘old’ stuff.

          Decisions, decisions…

    • Trymor
    • 9 years ago

    “or to compensate for personal shortcomings,” – LOL

    • rxc6
    • 9 years ago

    Here just to mess with ssk.

      • arklab
      • 9 years ago

      How about fast RAM (both timings and latency) helping other parts of the PC?

      For those who aren’t building a gaming rig (say, an HTPC), is there any advantage in the other subsystems?
      The transfer of data to hard drives? (I need eight streams of high-def video from tuners to storage drives.)

      Any help for USB (either 2.0 or 3.0)? I have so many keyboards, mice, remote controls, etc. that things seem to get bottlenecked here.

      Would general overclocking be some help, too?

        • UberGerbil
        • 9 years ago

        HDs and USB? Uh, no. Look at those graphs on the first page: even the slowest memory here is moving 18GiB/s and showing a latency of 44ns. Your [url=https://techreport.com/articles.x/18712/4<]fastest HDs[/url<] sustain about 150MiB/s and have latency measured in the milliseconds; even SSDs (or HDs reading entirely out of cache) only burst at a couple of hundred MiB/s. In other words, the slowest memory here is on the order of a hundred times faster than your secondary storage. When transferring data to/from the HD, the RAM is spending most of its time twiddling its thumbs, waiting.

        Likewise, USB 3.0 is supposed to offer about 400MB/s in the real world (after protocol overhead and encoding). That's faster than any consumer SSD today, but it's still 40 or 50 times slower than the slowest memory tested here. Again, your RAM is twiddling its thumbs.

        Worrying about your memory speed as a factor when considering any of these other, much slower subsystems is a bit like me sending you a paper letter via snailmail and asking you to email me when you get it, and then worrying about the speed of the email as a factor in the total time for this operation. You may be getting bottlenecked, but it's not by your RAM.
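        The ratios UberGerbil cites are easy to reproduce; this quick sketch uses the approximate figures from the comment (treating the USB number as MiB/s for simplicity):

```python
# Reproducing the rough ratios in the comment above: even the slowest RAM
# tested dwarfs storage and USB throughput. All figures are the approximate
# ones cited there (the USB number is treated as MiB/s for simplicity).

GIB = 1024 ** 3
MIB = 1024 ** 2

ram_bw  = 18 * GIB   # slowest memory config tested, ~18 GiB/s
hdd_bw  = 150 * MIB  # fast hard drive, sustained
usb3_bw = 400 * MIB  # real-world USB 3.0 after protocol overhead

print(f"RAM vs HDD:   {ram_bw / hdd_bw:.0f}x faster")
print(f"RAM vs USB 3: {ram_bw / usb3_bw:.0f}x faster")
```

        That works out to roughly a hundred-times gap against hard drives and a forty-odd-times gap against USB 3.0, which is the point: the bottleneck is never the RAM.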

          • stdRaichu
          • 9 years ago

          UberGerbil, you clearly have no appreciation for epeen. If it’s more expensive it stands to reason it must make things faster, even if a pleb like yourself couldn’t appreciate the difference. Personally, my internal chronometer is sensitive enough that I can tell when my 80th mem bench of the day finishes 0.003% faster (that’s a saving of 49s a year folks!). I bet you don’t even have oxygen-free glass in your optical cables.

            • arklab
            • 9 years ago

            Yes, I’m well aware of the basic transfer speeds of HDs & USB, but the supporting subsystems and caching of Windows may be involved – and benefit from faster RAM.

            The sarcasm is always appreciated though, stdRaichu.

            • stdRaichu
            • 9 years ago

            Trust me, I’ve done plenty of fiddling with RAM in order to eke out even the most minuscule performance increase from it (esp. back when I was doing x264 on my X2 4200)… and always failed. Pretty much everything these days is bottlenecked on either execution (CPU) or filesystem I/O; shuttling things in and out of memory has pretty much been a non-factor for at least five years. I could get a barely noticeable improvement out of WinRAR when using premium RAM, but even that was about 2% – i.e. negligible.

            Far better to take the hundred you save by buying bog-standard RAM and buy a couple of nice bottles of whisky. Then, as you slip into that blissful drunken miasma that only a single malt can give, you will give the age-old cry of “I’m a shensible sopper when it comes to baying bulanced compoter compunents!” and all will be right with the world.
