A Bridge too far: migrating from Sandy to Kaby Lake

After nearly six years and countless posts about how my i7-2600K was still good enough, I decided that I’d had enough of good enough when we published our Core i7-7700K review. It was time to upgrade my PC, and I recently completed my new build. I can hear the palms contacting faces already. “Fish, you idiot, Ryzen is almost here! You should have waited.” That could be, but I won’t be buffaloed into second-guessing my decision. As it happens, I’m quite pleased with the results and I’m pretty confident that Ryzen couldn’t do any better.

Now, I don’t have any insider info about AMD’s upcoming chips. My decision to pull the trigger on the upgrade was informed by the same speculation and rumors available to anyone reading TR. What I did know was that the games I want to play need every last bit of single-threaded CPU power they can get. Simply put, my guess is that when it comes to clock speed and IPC, Kaby Lake is going to beat Ryzen handily.

To succeed in DayZ you need the right gear: a lot of apples and a lot of single-threaded CPU performance.

First off, it’s good to know where I’m coming from. My old rig (now paired with a Radeon RX 470 for dedicated The Sims 4 duty on the TV) was an i7-2600K mildly overclocked to 4.2 GHz and sporting 16GB of DDR3-2133 RAM. That Sandy Bridge build served as my main PC for probably twice as long as any of my previous rigs. It’s still pretty respectable, but seeing these results in our i7-7700K review pushed me over the edge. The only thing I kept from my old PC was my EVGA GTX 980 Ti Hybrid which, if I behave myself, still has over a year of service ahead of it to reach my personal every-three-year GPU upgrade threshold.

Here’s the complete list of everything that went into my upgrade:

Asus Prime Z270-A

Much of my parts list should be self-explanatory. Many of the items therein are TR favorites, while others are at least generally recognized as quality stuff. Of course, there’s some personal preference involved in my choices, but I doubt many gerbils would complain if their own PCs suddenly contained identical hardware to the stuff I bought. One unknown for me was the Asus Prime Z270-A. I purchased it the day it went up for sale with nary a review to be found online. I usually buy motherboards in the $150 range, so it fit the bill there. It also seemed to have a solid combination of features for the price, so I took a chance and ordered one up.

That was over a month ago, and so far I have no major complaints with the Z270-A. I mean, it would be nice if it had a few more USB ports on the rear panel, but I knew what I was getting into when I bought it, so it’s hard to judge it for that. It does have RGB LEDs, but they behave themselves. By default, the LEDs light up white for a while when you power on the PC, then they respectfully turn off after the boot process is complete. The only time you’ll otherwise see them is if you futz with them in Asus’ software. Maybe I’ll mess with the colors on some rainy day in the future, but the lighting doesn’t matter to me in the slightest. My computer is behind my monitor on a corner desk, so I’ll never see inside it anyway despite the fact that the Corsair Obsidian 450D case I chose does have a window.

One minor bummer about this board is that I can’t run my fancy G.Skill Trident Z 3866 MT/s memory at full speed in it yet. G.Skill doesn’t list the board on its official compatibility list, but I was hopeful the pairing would work out anyway. I am able to run the memory at 3733 MT/s with the same XMP-dictated timings as the slightly-faster speed would have, so I’m not exactly starved for memory bandwidth. Of course, there’s more than one factor that could be to blame for this issue, and it might not be the board’s fault. Regardless, I’m hoping for a BIOS update that will let me run the RAM at full speed.

The only other nitpick I have with the Z270-A is that the teeny-tiny standoffs for the pair of M.2 slots aren’t pre-installed. I haven’t built a lot of systems with M.2 drives in them yet, but I know the NUCs I’ve dabbled with had standoffs built-in. When I first plugged in and tightened down my 960 EVO, I realized that something didn’t look quite right, and then I found the bag with the nearly invisible standoffs in it. Thankfully, nothing was damaged, but it’s something to watch for.

The small stuff aside, the Z270-A is a great board: simple but comprehensive, in the vein of the Z97-A and Z170-A. It’s got an Intel NIC, SLI support, USB 3.1, and great fan controls, all at a reasonable price. In fact, we recently recommended this board in our February system guide for those very reasons. If you’ve used or read a recent review of other Asus boards, you more or less already know what to expect from the UEFI. It’s standard Asus goodness all around. The AVX offset feature that arrived with Kaby Lake is especially handy to have: it automatically reduces the CPU multiplier for AVX workloads, so you can run a swift overclock most of the time and still maintain stability when AVX comes into play.
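
To make the offset concrete, here’s a quick back-of-the-napkin sketch in Python (the multiplier and offset below are example numbers, not necessarily my own settings):

    # Rough sketch of how an AVX offset works (example values only)
    bclk_mhz = 100            # base clock, nominally 100 MHz
    core_multiplier = 48      # example all-core overclock multiplier (4.8 GHz)
    avx_offset = 3            # example AVX offset set in the UEFI

    normal_mhz = bclk_mhz * core_multiplier                # 4800 MHz for ordinary code
    avx_mhz = bclk_mhz * (core_multiplier - avx_offset)    # 4500 MHz when AVX work shows up

    print(f"Non-AVX: {normal_mhz} MHz, AVX: {avx_mhz} MHz")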


i7-7700K vs. i7-2600K

Surely you’ve already read our i7-7700K review, but if not, it would be a good idea to catch up before reading further. The details below aren’t anywhere near as comprehensive as our standard suite of testing. I’ll be talking about the results of just one benchmark and my own personal pair of chips. I’m going for real-world results, as one might compare their own two builds post-upgrade. We’re talking about completely different systems and even slightly different driver versions here. The only things in common are the video card, the operating system (64-bit Windows 10, of course), and the benchmark itself.

Before I get all excited about the actual benchmark details, I’m going to bust out a dirty ten-letter word: subjective. That’s not generally how we roll at TR, so I’ll avoid claiming anything definitively without the data to back it up. That said, the “seems faster” factor is real, even though I wasn’t expecting it to be noticeable. It’s there, even just in normal day-to-day use.

I’m not alone in noticing this, either. Our resident code monkey, Bruno Ferreira, jumped from an i5-2500K to an i7-6700K last year and spammed the TR Slack channel about how awesome it was. That happened even without the hot-clocked memory and NVMe SSD in my new build. Perhaps the most surprising boost is how the lowly gem Rimworld benefits from my new hardware. My colonists can now raise substantially larger Emu armies before frame rates suffer. Excellent.

Bohemia Interactive’s DayZ is where I spend most of my gaming time, however, and its performance can be approximated well with Yet Another Arma Benchmark, an AI-heavy scenario for Arma III made by community member Greenfist. This is the same benchmark that Jeff used to tease performance differences from various memory speeds in his i7-7700K review. That brings us back to my feelings about Ryzen. I suspect Kaby Lake will have a significant leg up on AMD’s latest when it comes to the games that I’m looking for better performance in. Let’s see what my upgrade bought me in Arma III and DayZ.

After a couple hours of 100% load w/ Prime95 (non-AVX)

For this test, all of Arma‘s graphics settings were cranked up to the max, except for FSAA. My i7-2600K was running at 4.2 GHz and my i7-7700K was running at 4.8 GHz, which is as far as I’ve pushed it so far. (Side note: I think I got a pretty good Kaby chip, and I may have won the TIM lottery, as well.) I’m using 16GB of DDR3-2133 with my i7-2600K and 16GB of DDR4-3733 with my i7-7700K. My GTX 980 Ti is the same in both cases but the SSDs, motherboards, PSUs, cases, and cooling hardware are all different. However, I don’t think that any of those are major contributors to the performance delta between the systems. On to the results.

i7-2600K @ 4.2 GHz w/ 16GB DDR3-2133

Ouch. This notoriously poorly-performing title from 2013 puts the hurt on my old system. Even at 1920×1080, it doesn’t get over 45 FPS and averages just over 30 FPS. Jeff’s unofficial testing for the i7-7700K review placed a stock i7-3770K (3.5 GHz) with DDR3-1866 and similar settings at about 35 FPS when paired with a GTX 1080, so we at least know we’re in the ballpark of what to expect. Let’s see what six years of ticking and tocking gets us.

i7-7700K @ 4.8 GHz w/ 16GB DDR4-3733

Yowza! Nearly double the average FPS from “just” a CPU upgrade? That’s not normal. Only a game engine as goofy as Arma III‘s could produce that result. I’ll take it, though. This jump in performance is a huge quality of life improvement and I can confirm that, subjectively, it carries over to DayZ, as well. What’s that, you say? None of that matters because 1920×1080 is for chumps and if I was playing at a proper resolution the CPU wouldn’t matter nearly as much? Well, let’s check out one more result, then: this time at 3440×1440.

This is so wrong…

You’re looking at just a 2.5-FPS drop after a roughly 2.4x increase in the number of pixels being pushed around. There’s a saying for this sort of thing: That’s Arma! I guess my GTX 980 Ti isn’t out of a job yet. In DayZ, the results have always been similar. In the game’s worst-performing areas, it didn’t matter whether you ran at 1080p with everything on low or at 4K with AA on high; your frame rates were going to be in the low teens no matter what, and your GPU wouldn’t even break a sweat.
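
For the curious, the napkin math on that pixel count (a quick sketch in Python):

    # Pixel counts at the two tested resolutions
    fhd = 1920 * 1080       # 2,073,600 pixels
    uwqhd = 3440 * 1440     # 4,953,600 pixels
    print(uwqhd / fhd)      # ~2.39, i.e. roughly 2.4x the pixels for a ~2.5-FPS drop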

These results are relevant to my upgrade story because as recently as early last summer, before the DX11 renderer and version 0.60 of the game were released, my trusty i7-2600K and GTX 980 Ti combo was nigh-unplayable in many areas of the map. The 0.60 update would ultimately double or triple the frame rate in the most troublesome areas, and my new PC is frequently doubling those numbers yet again. All with the same graphics card, and all while Bohemia Interactive is improving the look of the game with dynamic shadows and lighting. Cumulatively, it’s completely transformative. I didn’t know if I would ever see the day when DayZ ran buttery smooth. Such a thing was practically unachievable just eight months ago, no matter what hardware you had or how much money you spent.

Happy dance!

Conclusions

Make no mistake: I understand that my experience is clearly a niche case. Most games aren’t as CPU-limited as DayZ and Arma III. However, the lesson I learned after chasing perfect DayZ performance for years is that sometimes we just need CPUs to get things done now. In those cases, there’s no substitute for clock speed, IPC, and, apparently, even memory bandwidth and latency. I was guilty of underestimating the importance of the CPU in the overall system, especially since I’ve always gamed at above-average resolutions where conventional wisdom suggests the CPU matters even less.

All told, I stretched my Sandy Bridge system too far. I could have experienced most of the improvements I’ve seen in the last month earlier if I had jumped on Skylake when it came out. Then again, Z270 seems to be better-suited to super-fast memory than Z170, so maybe waiting was the right choice. What about Ryzen? Well, we’ll just have to see how that pans out. I don’t think I’ll regret my choice, but I geared my upgrade toward a specific task. You win some benchmarks, you lose some. Other builds will be better suited for other needs. All told, though, I’m happy with my decision, and if you’re wondering whether it’s time to upgrade your own Sandy system, the answer is probably “yes.”

Colton Westrate

I post Shortbread, I host BBQs, I tell stories, and I strive to keep folks happy.

Comments closed
    • ramon zarat
    • 3 years ago

    I’ll keep my 2500K at 4.7GHz a while longer, thank you very much. Nice little bump with the 7700K, but still way subpar from my own perspective. People seem to forget what true improvement between CPU generations used to mean and now settle for a mere 20-50% improvement as adequate.

    Going from a 386 DX25 in 1985 to a 486 DX50 four years later in ’89 meant over 100% better performance. Five years later in 1994, still a 100% improvement with a DX4 100. Another four years and we got a Pentium II at 400MHz in 1998, well over 300% faster than that DX4 100. Now, THAT was an improvement (http://www.intel.com/pressroom/kits/quickrefyr.htm).

    I still don’t feel any pain using my 2500K. Twenty more seconds on my 7-Zip compression, five more minutes for my video conversion, or gaming at 107 FPS instead of 150 are all unnoticeable. I’m still much faster than the vast majority of PCs out there. I won’t upgrade until a new CPU comes along that performs at least twice as fast in every scenario, not 30% on average...

      • jihadjoe
      • 3 years ago

      Those were different times. Product cycles were a lot longer, a lot of the computing tricks that make CPUs so fast today hadn’t yet been discovered, and heat/power limits were as far away as the horizon.

      The 386s didn’t even need heatsinks, and the 486 and early Pentiums got by with heatsinks simply set on top of the CPU without thermal compound of any sort. TDP wasn’t even part of the specs until the Pentium II.

    • JuniperLE
    • 3 years ago

    I’m very curious to see how Ryzen performs in this thing. I mean, Ryzen is probably slower, but at the same time it will also run with very fast DDR4.

    Also, this must be a nightmare on Bulldozer.

      • chuckula
      • 3 years ago

      “I mean Ryzen is probably slower”

      NO!!!

      “also this must be a nightmare on Bulldozer.”

      I think you’re allowed to admit that Bulldozer wasn’t all that now. Especially with this tweet from Raj: https://twitter.com/GFXChipTweeter/status/833074477713928193

        • ImSpartacus
        • 3 years ago

        That tweet is savage af. Raja is an animal.

    • atari030
    • 3 years ago

    The only basis on which I’d say it was a mistake not to wait for Ryzen to come to market before moving forward with this upgrade is that I’d expect the appearance of Ryzen to impact Intel’s CPU pricing, its lineup, or both. It’s possible some good money could have been saved by simply waiting another couple of months.

    • drfish
    • 3 years ago

    For science!

    i7-7700K @ 4.8 w/ 16GB DDR4-3733 – 56.5 (dual channel)
    Stock i7-7700K w/ 8GB DDR4-3733 – 47.4 (single channel)

    Edit: I apologize, I checked my settings after posting this and realized I had left the multiplier on Auto. So my CPU likely turbo’d up to 4.5 GHz for the single-channel test.

    • ozzuneoj
    • 3 years ago

    I think it’d be really interesting to see how a $60 Pentium G4560 compares to a Sandy Bridge i5 (overclocked and at stock) in these same tests. Do these massive improvements that some benchmarks show going from Sandy/Ivy/Haswell to Kaby Lake still occur with the lower-end chips?

    I mention this because the used PC market was flooded with older i5-based workstations over the past several months (seems they’re a bit more scarce now), and they are a fantastic value. The G4560 is a nice value option for a new system and new platform, but it doesn’t totally dominate the older i5s, since it still costs quite a bit more to build a system from scratch. If it did give that massive “Kaby Lake” boost in these situations, that could really push things in their favor.

    Someone should send Colton a G4560 and a 2500K and have him run some similar benchmarks with that. :)

      • MOSFET
      • 3 years ago

      I totally agree with this, as another interesting experiment. I do not have either of the mentioned chips though.

    • liamtech
    • 3 years ago

    Excellent story. This was an updated build tested for real-world use. When I update my Sandy Bridge to the new AMD CPU or a 7700K, I’ll base the decision on BF1 testing and only the games that I play. I don’t care about rendering time or how the CPU does in Ashes of the Singularity, as I don’t render or play that game.

    Colton saw an improvement for his needs, so this upgrade was a major success in my opinion.

      • anotherengineer
      • 3 years ago

      “Colton saw an improvement for his needs, so this upgrade was a major success in my opinion.”

      Last I checked games were typically wants, not needs ;)

      But if he got what he wanted from his money, then yes.

    • drfish
    • 3 years ago

    Here are some additional results:

    i7-7700K @ 4.8 w/ DDR4-3733 – 56.5
    i7-7700K @ 4.8 w/ DDR4-3200 – 49.6
    i7-7700K @ 4.5 w/ DDR4-3200 – 48.8
    i7-7700K @ 4.2 w/ DDR4-3733 – 48.1
    i7-7700K @ 4.8 w/ DDR4-2400 – 46.4
    i7-7700K @ 4.2 w/ DDR4-3200 – 45.5
    i7-7700K @ 4.5 w/ DDR4-2400 – 43.9
    i7-7700K @ 4.2 w/ DDR4-2400 – 40.2
    i7-2600K @ 4.2 w/ DDR3-2133 – 31.0

      • Firestarter
      • 3 years ago

      wow, that game must fly on a quad channel setup

        • raddude9
        • 3 years ago

        Or perhaps a CPU with more cache memory, all depends on how the game was coded.

        • drfish
        • 3 years ago

        I’m curious to know. Does it like the raw bandwidth or the low latency more?

          • Firestarter
          • 3 years ago

          sounds like a good test case for Kaby Lake-X. Or you could play with the latency settings on your current test rig to find out, but I assume the real latency was roughly constant during this test (perhaps slightly decreasing with the faster settings)

          • JordanV
          • 3 years ago

          It seems to really like the 128MB EDRAM cache on the Crystalwell CPUs.

          For DRAM, there are a handful of quad-channel machines in the Steam discussion for that benchmark:

          i7-6850K @ 4.5 w/ DDR4-3200 – 44.6 (4K res)
          i7-6850K @ 4.3 w/ DDR4-2444 – 44.5
          i7-3930K @ 4.8 w/ DDR3-2133 – 40.1

          All that extra quad-channel bandwidth seems to help but not nearly like upping DRAM clocks.

      • ImSpartacus
      • 3 years ago

      Good stuff. Edit that back into the article if it’s not already there.

      • raddude9
      • 3 years ago

      So, in changing the system you saw an 82% improvement in frame rates. (Saying that DDR3-2133 ~ DDR4-2400) Changing the CPU gave approx 29% of the improvement, whereas 53% came from the increased RAM speed. Just shows how important memory speed is for some benchmarks.
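
      Spelling that out with the figures from drfish’s table above (a rough back-of-the-envelope split in Python; note the leftover chunk lumps the 4.8 GHz clock bump in with the faster RAM):

          # Average FPS figures quoted from drfish's table above
          old_system   = 31.0   # i7-2600K @ 4.2 w/ DDR3-2133
          cpu_swap     = 40.2   # i7-7700K @ 4.2 w/ DDR4-2400 (closest like-for-like RAM)
          full_upgrade = 56.5   # i7-7700K @ 4.8 w/ DDR4-3733

          total = full_upgrade / old_system - 1   # ~0.82 -> the 82% figure
          cpu   = cpu_swap / old_system - 1       # ~0.30 from the CPU swap alone
          rest  = total - cpu                     # ~0.53 from extra clocks plus faster RAM
          print(f"total {total:.0%}, cpu {cpu:.0%}, clocks+memory {rest:.0%}")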

      • K-L-Waster
      • 3 years ago

      Very interesting — so it appears this game isn’t so much CPU bound as it is memory bound.

      • BurntMyBacon
      • 3 years ago

      That game is almost more sensitive to RAM bandwidth than CPU speed (at least on the same architecture). Not exactly typical of most modern games selected by reviewers. I’ll need to keep that in mind.

      • Kretschmer
      • 3 years ago

      I wonder how the 5775C would compare…

        • ImSpartacus
        • 3 years ago

        I can’t wait until Intel does another crystalwell chip.

        So far, they’ve had a “shtick” to every generation:

        - Most either get an architectural update (tocks) or a process update (ticks).
        - Stuff like Haswell Refresh got better clocks and TIM (and an unlocked dual core).
        - Broadwell got Crystalwell.
        - Kaby Lake got better clocks and a process refresh (to enable those clocks, and OC headroom).
        - Coffee Lake will bring an extra two cores.
        - Cannon Lake will be a “tick” with its new process.
        - After that? I’m thinking Crystalwell will have to make an appearance during one of these “optimization” years.

        At some point, Intel’s bag of tricks will run dry and they will be forced to resurrect Crystalwell in order to have a compelling release every year. And as they appear to be using the HEDT sockets more often (e.g. the upcoming Kaby Lake-X), maybe Crystalwell 2.0 will be small enough to fit on a package for an upcoming HEDT socket. A guy can hope!

      • jihadjoe
      • 3 years ago

      Interesting, and it seems that the chip gets more and more bandwidth-constrained the higher the clocks are (which makes sense).

      DDR4-3733 vs 3200
      @4.2 = 2.6fps gain
      @4.8 = 6.9fps gain

      • Freon
      • 3 years ago

      I’d love to see further testing in some other new AAA titles. I know some of them may boringly show little gains, but other information out there is mixed.

      • anotherengineer
      • 3 years ago

      i7-7700K @ 4.2 w/ DDR4-2400 – 40.2
      i7-2600K @ 4.2 w/ DDR3-2133 – 31.0

      So if you dropped the memory to DDR4-2133 it might be even a bit lower.

      Looks like memory helps lots also:
      i7-7700K @ 4.8 w/ DDR4-3733 – 56.5
      i7-7700K @ 4.2 w/ DDR4-2400 – 40.2
      A larger increase there than the upgrade over the 2600K, interesting.

      Can that game use more than 4 cores?

        • drfish
        • 3 years ago

        I actually did run a 4.2 GHz test at 2133. It was 41.5 – so within the margin of error of the test.

        It does spin off tasks to other threads, but one thread does the heavy-lifting.

      • Ninjitsu
      • 2 years ago

      This is nice, thanks

    • CampinCarl
    • 3 years ago

    So, I was looking at doing something similar to this as a forum post later this spring after I acquire a new computer (Ryzen is looking promising, but if it falls on its face, Kaby Lake)… Is there any way I could get a copy of the Qt compile benchmark and/or Robobench? Not that writing my own versions would be difficult, I was just hoping to provide data to the community that was at least somewhat comparable to what they’d be seeing from reviews written by TR staff.

    • bfar
    • 3 years ago

    Myself, I went from a 2500k to a 6700k last year, and also picked up on a noticeable jump in performance in some but not all applications, including some games.

    I feel my timing was sound – I got a good year out of this already, and will probably get another year or two at least before I upgrade again – most likely an 8 core from either Intel or AMD.

    • RickyTick
    • 3 years ago

    Thanks for the write-up Fish. Nice job!

    I’m in the process of upgrading from an i7-950 and trying to decide on components and looking for bargains. You just helped me decide on the motherboard. Thanks again.

    • wingless
    • 3 years ago

    He just couldn’t resist all the RGB…

    • anotherengineer
    • 3 years ago

    Fish, a question.

    What PCIe version was your Sandy mobo? I would assume 1.0 or 1.1; going to ver 3.0 on Kaby may have also helped fps a bit with the extra GPU bandwidth?

    The ssd bandwidth between mobos probably also helped loading times, etc.??

    And Ram speeds also??

    Interesting review though. It would be interesting to see, if the CPUs were clocked the same and SSD bandwidth, RAM speed, and PCIe bandwidth were cut down to match the Sandy mobo, how much of a difference would be noticeable then? It would be a much closer CPU-to-CPU comparison, and that would be the interesting thing. Did you gain more from all the bandwidth from a new mobo, or from the CPU, or did they contribute equally??

      • DPete27
      • 3 years ago

      Come on now. Sandy Bridge was PCIe 2.0. It’s not THAT old.

        • anotherengineer
        • 3 years ago

        Jan 2011 to now is 6 years. I would say in computer hardware terms that is a long time.

        From TR’s review, they used this mobo:
        Intel DH67BL
        http://ark.intel.com/products/50098/Intel-Desktop-Board-DH67BL

        Memory was DDR3-1333:
        http://ark.intel.com/products/52210/Intel-Core-i5-2500K-Processor-6M-Cache-up-to-3_70-GHz?q=i5-2500k

        So some things yes, some no.

      • drfish
      • 3 years ago

      This is my old board: http://www.asrock.com/mb/intel/z77%20pro4/ It has PCIe 3.0. This post (https://techreport.com/forums/viewtopic.php?f=3&t=88087#p1165890), and the one I follow it with, explains how I ended up with that board back in the day. Good times. :)

      I’ll make a note to run YAAB at 4.2 GHz and lower memory speeds, will post results here later.

        • anotherengineer
        • 3 years ago

        Cool, thanks for the feedback.

        • derFunkenstein
        • 3 years ago

        I’m pretty sure that while the board supported PCIe 3.0, that was just for Ivy Bridge upgrades. This link (https://techreport.com/news/21433/all-gigabyte-6-series-mobos-to-support-ivy-bridge-pcie-3-0) is kind of vague, but it seems to assume the knowledge that PCIe 3.0 was going to be new in Ivy Bridge. Even Gigabyte’s PR statement suggests that 3.0 was still in the future with Intel’s CPUs.

        • DPete27
        • 3 years ago

        Z77 = Ivy Bridge. Even some of the 67-series boards supported PCIe3.0, but I think you had to be running an Ivy Bridge CPU to enable PCIe3.0, no?

          • MOSFET
          • 3 years ago

          Correct. PCIe 3.0 and DDR3-1600 were unlocked with Ivy.

    • ptsant
    • 3 years ago

    To me Kaby seems like the pinnacle of the given architecture and process technology. It won’t get much better than this without a significant architectural revision or node shrink. This is also why Intel will have a hard time following the evolution of the Zen cores.

    So, I think people who bought Kaby really got a very decent system that will last a very long time. Especially for notoriously single-threaded games like Arma III, I think it’s a great choice.

    • dashbarron
    • 3 years ago

    I posted in the forums about wanting to move from my Q9450 to … something new. Coffee Lake seemed too far out…until Monday. Now I’m waiting anxiously, system has been slowly failing and falling apart for two years (9 years old in a month).

      • ImSpartacus
      • 3 years ago

      If your personal workload isn’t highly threaded (and if you’ve survived with a Q9450, then I doubt it is), then Coffee Lake might not be right for you.

      Remember that Coffee Lake is just a six core version of Kaby Lake. Normally with these kinds of refreshes, we get higher clocks. But with a 50% increase in cores, we honestly might not.

      And if your workload isn’t highly threaded, then you would benefit more from higher clocks found in Kaby Lake.

      I’d just monitor the market for excellent deals and jump on whatever gets a healthy discount. With Ryzen coming, I bet some deals are close.

      • anotherengineer
      • 3 years ago

      Still on my 955BE that I assembled in Aug. 2009. Still rock stable and running great for what I use it for. The old Radeon 6850 has been on legacy driver support now for over a year, but no issues.

      Doing a FIFO job now, so not much point upgrading since I am only home about 25% of the year, and of that it’s catching up with more important things to do for the other 24.8% of the time.

    • ronch
    • 3 years ago

    Given how the 2600K is about as fast as an FX-8350 in highly threaded tasks and pulls ahead in single threaded ones, does this mean I should take the leap too? But even if I could leap now, there’s nothing to catch me because Zen isn’t out yet. I’d really much rather get AMD.

    • HammerSandwich
    • 3 years ago

    “...but the lighting doesn't matter to me in the slightest.”

    Normally, I’d agree, but...

    “...the LEDs light up white for a while when you power on the PC, then they respectfully turn off after the boot process is complete.”

    This sounds useful - in a clever way! - for swapping parts & debugging.

    • GatoRat
    • 3 years ago

    Out of curiosity, why an 850 watt power supply?

      • drfish
      • 3 years ago

      Fair question, my old PSU was an 850W Corsair. There’s always a chance that I might decide to go for SLI in the future, as I’ve had SLI twice before and Crossfire once. It probably won’t happen, but I wanted a little extra juice, just in case.

    • kuttan
    • 3 years ago

    Is this the Techreport’s official Intel fanboyism ?? Colton Westrate upgraded from i7 2600k to i7 7700K just to play worst CPU optimized game Arma ?? To make it even worse what fairness in performance comparison between a 2600K running at 4.2Ghz Vs 7700k at 4.8Ghz ?? A large number of current generation games already scales past 4 CPU cores and buying a 4 core CPU again in 2017 was a stupid move. The 4 core CPU Colton Westrate bought will get obsolete much faster than 2600k that you bought 6 years ago. Every informed people in the tech world knows clearly that Intel’s Clock for Clock performance gain is negligible from Sandy Bridge generation CPU onward.

    Overall a terrible article from TechReport showing fanboyism, biased unprofessional journalism.

      • Ifalna
      • 3 years ago

      You get around 5% increase for each generation.
      Negligible if you update from G1 to G2, but easily noticeable from G1 to G7.

      I think, if I were to update from my 3570K, I would get around 25% more processor pew pew. Not enough for me to warrant it, but the improvements are definitely there.

      As for the game chosen: yeah but if you actually bothered to read the text, he clearly stated that Arma was a special case.

      As far as the “will be obsolete faster”: I don’t think the TR guys mind updating a system every 3 years instead of 6. It’s what they DO, after all. ^^

      • diademz
      • 3 years ago

      Article aside, the main point is if you’re already running a K CPU and planning to use it long-term, you should overclock the hell out of it. I think this was not the writer’s priority. Sure you get nice goodies (USB 3.x, M.2, DDR4, etc.) but in games it’s all but negligible. Nevertheless, satisfying the upgrade itch is always nice if you can afford it.

      • VincentHanna
      • 3 years ago

      “To make it even worse what fairness in performance comparison between a 2600K running at 4.2Ghz Vs 7700k at 4.8Ghz ??”

      As it should be... as in, CPUs shouldn’t be running with the same clock speeds as they were half a decade ago; they should be better/faster. The only real problem that I see with this is that my 3930K (also Sandy Bridge) runs at 4.6-4.8, and I also consider this to be a modest overclock (I’ve gotten my PC to be stable at 5.1GHz in the distant past).

      But yes, in general, this is what I expect from an upgrade, and this is the reason I tend to look at the new Intel hotness, ask “why no 5GHz?”, shrug, and say I guess I’ll be waiting another 6 years. Oh well...

      • drfish
      • 3 years ago

      “The 4 core CPU Colton Westrate bought will get obsolete much faster than 2600k that you bought 6 years ago.”

      I certainly hope so! I have to disagree with the rest of your post though. A 600 MHz clock speed difference does not a nearly doubled frame-rate make. There’s a lot more going on here with IPC and memory speed that made the upgrade worth it to me. Hopefully, Jeff will do some YAAB testing with Ryzen and we’ll all see if I made the right choice or not.

      • Krogoth
      • 3 years ago

      Kaby Lake has a number of under-the-hood improvements over Sandy Bridge and has a fatter memory bus and more cache at its disposal. The silicon is at least 25% faster than Sandy Bridge (assuming clock speed is equal), and more if the application takes advantage of the newer instruction sets in the Kaby Lake architecture along with the extra cache and bandwidth to play around with.

      Barring a revolution in microarchitecture design or a paradigm shift in computing itself, it is unlikely the Kaby Lake platform will be "obsolete" anytime soon.

      • dsirius
      • 3 years ago

      Indeed, a terrible article from TechReport. I wonder what the author was thinking. Better to wait 2 more weeks and see exactly.

      • ptsant
      • 3 years ago

      Although I am certainly skeptical of an upgrade 2 weeks before a major launch, the fact is that the writer did provide some very convincing benchmarks for what he does with the machine and the difference is not negligible. Plus, we all know the bells and whistles that come with a new MB are not insignificant. Point is, if he is happy with it, the upgrade is successful.

      I didn’t get the impression the writer was promoting the 7700K for anything other than Arma III and, in that sense, I felt the article was very honest.

        • derFunkenstein
        • 3 years ago

        We should also remember he didn’t build the machine yesterday. It’s not “two weeks”. Instead it’s more like “almost two months”.

      • chuckula
      • 3 years ago

      “Is this the Techreport's official Intel fanboyism ??”

      You think this is bad?!?!? You should have seen what that Damage guy was like! Didn’t he quit to go work at Intel?

      • albundy
      • 3 years ago

      well, the ryzen benchmarks will determine if it was a stupid move.

        • ImSpartacus
        • 3 years ago

        I feel like there’s effectively no way that Ryzen beats the single-threaded performance of Intel’s offerings.

        Let me make clear that inferior single-threaded perf won’t sink Ryzen and I fully expect that it’ll achieve AMD’s promised 40% ipc improvement (and maybe more).

        Ryzen can compete in areas where cheap threads are helpful. They will do well there.

        But gaming (especially the shitstorm of cpu optimization that is arma) doesn’t need more than four threads. Instead, frame time consistency will greatly improve with better single-threaded performance.

        If you want the “best” arma experience like the author did, then Intel is untouchable.

      • Airmantharp
      • 3 years ago

      1. TR has been accused of *AMD* fanboyism, and now Intel? Take a hint.
      2. If Arma is the game to be played (I mean, he’s buying, right?), then what’s it to you?
      3. Some games scale beyond four cores, but not nearly all. And lucky for us, Intel makes some *strong* quad-core parts.
      4. Buying the best consumer-oriented CPU is a ‘stupid move’? Is it because you’re jealous?
      5. The 2600k almost sets a record for how long it has remained viable (and with a solid overclock, it still is). And that’s nice, but progress is always welcome and preferred.
      6. The clock-for-clock gain from one generation to the next isn’t a lot, but it isn’t negligible, and this is more than a single-generation jump.
      7. It’s an article based on a single data point. And your unsubstantiated criticism is showing *your* fanboyism.
      8. I wrote all this on a whim just to send you back to the top for more downvotes!

        • BurntMyBacon
        • 3 years ago

        1. Bipolar? Dissociative Identity Disorder? Schizophrenia (OP)? ;’)
        2. Are you trying to tell me that we don’t all play games that are exclusively modern, well optimized, and sanctioned as part of the current GPU test suite at our favorite tech reviewer?
        3. If there is anything Bulldozer taught us, it’s that if the game can use them, you should always choose more cores. Frequency and architecture can take a back seat. On an unrelated note, I’ll be sitting in the back for now.
        4. … Yeah. … I got nothin.
        5. You mean it hasn’t set a record already? Guess we’ll have to wait and find out if it’s record worthy when it finally ceases to be viable. Which processor currently holds the record? I’ve got a Phenom II X4 955BE that still works for most things, but is really starting to show its age despite the massive overclock.
        6. Nope. IPC performance improvements halted entirely after Sandy Bridge and it’s not fair to count frequency improvements. We’ll just have to throw more cores at the problem. All three benchmarks I’ve seen in the last year prove this. ... This one doesn’t count.
        7. Yes, but the single data point was neither selected by the OP, nor did it corroborate his point of view. Therefore, it must be dismissed with extreme prejudice.
        8. Two can play that game.

          • Airmantharp
          • 3 years ago

          I’m betting that your response was at least partially facetious, so you probably don’t deserve the downvotes, but that happened. Anyway, responding gets back to number eight ;).

            • BurntMyBacon
            • 3 years ago

            I would have thought it was obvious. Though, I am genuinely surprised if the 2600K doesn’t own the record for longest viable processor. I don’t mind, though. It did promote number eight ;).

        • K-L-Waster
        • 3 years ago

        Re: point 1 — keep in mind that most of the Gamergater forii out there are so pro-AMD that they’re convinced Lisa Su is shilling for Intel and NVidia, so this response isn’t a huge surprise.

          • Airmantharp
          • 3 years ago

          I don’t doubt that there are plenty of emotionally invested adolescents that are genuinely incensed at this kind of personal introspection being published on a prominent tech site.

      • Gasaraki
      • 3 years ago

      What a stupid comment. Even if every generation only gave you a 3% increase in performance clock for clock (2600K to 3600K was more than a 5% increase in performance), you would have gotten a 10%+ increase in performance. Then this is running at a higher clock, so probably at least a 25% increase total. Then you have other stuff like USB 3.1 support, PCIe 3.0, M.2, etc. So many benefits from the new chipsets.

        • derFunkenstein
        • 3 years ago

        Since percentages compound, the actual increase from SB to Kaby Lake is more like ~16% before accounting for other increases like clock speed and memory bandwidth. TR’s testing showed time after time that the gains were closer to 40%. Everything from games (like GTA5) to web browsing (JetStream) to rendering (CineBench) was a little short of 50% faster.
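
        A minimal sketch of that compounding, assuming a flat 3% bump per generation over the five steps from Sandy Bridge to Kaby Lake:

            # Compounding an assumed flat 3% IPC gain per generation
            per_gen = 0.03
            steps = 5   # Sandy -> Ivy -> Haswell -> Broadwell -> Skylake -> Kaby
            print(f"{(1 + per_gen) ** steps - 1:.1%}")   # ~15.9%, the ~16% figure above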

      • chuckula
      • 3 years ago

      Kampman’s totally trolling you on twitter dawg!

      “Unhot take: Ryzen will be a perfectly competent CPU for most and will sell in large enough numbers. Intel will keep IPC crown & cut prices.”

      https://twitter.com/jkampman_tr/status/832701673185218560

    • JordanV
    • 3 years ago

    I am surprised at the low 4.2GHz clock you got on your 2600K since 4.7/4.8 seemed to be more typical Sandy Bridge OCs. Your 7700K 4.8GHz result also seems to be slightly on the lower side for Kaby Lake, but considering the high RAM speed, maybe it’s a very good one? I know people on Haswell that had a lot of difficulty getting 3200 with 4.7+ CPU OCs so that Kaby Lake memory controller is a beauty which is what this game needs.

    Based on reports from Steam and your report:

    2500K @ 4.3GHz DDR3-1648 22.3fps (Sandy i5)
    2500K @ 4.7GHz DDR3-2133 36.1fps (Sandy i5)
    2600K @ 4.2GHz DDR3-2133: 31.0fps (Sandy i7)
    3930K @ 4.8GHz DDR3-2400: 40.1fps (Sandy-E i7)
    6700K @ 4.5GHz DDR4-2133? 42.8fps (Skylake)
    6700K @ 4.5GHz DDR4-3200 50.7fps (Skylake)
    7700K @ 4.8GHz DDR4-3733 56.5fps (Kaby Lake)

    It looks like if you want the best out of BIS engine games, shelling out for high performance RAM is critical. Though you gotta lot of balls buying now with Ryzen so close. Not that I’m the first, or second, or third person to say that.

    * Update 2/17

    Some of the results are from an older version of ArmA (1.54) so I reran the benchmark on my 3930K on the new one (1.66). The newer version is maybe 2-10% faster.

    3930K @ 4.6GHz DDR3-2400: 39.2 vs 38.6
    3930K @ 4.7GHz DDR3-2400: 42.8 vs 39.8
    3930K @ 4.8GHz DDR3-2400: 44.3 vs 40.1

    Tighter timings might be another path to higher performance. My DDR3-2400 results are with loose 11-13-13-35-2 timings versus the following DDR3-2133 result with 10-12-12-34-1 timings.

    i7-3930K @ 4.7 w/ DDR3-2133 – 43.2

    this is roughly comparable to the i7-7700K @ 4.5 w/ DDR4-2400 based on Fish’s results.

      • hansmuff
      • 3 years ago

      Meh, I couldn’t get past 4.5 without spending a lot of money on better cooling than the Corsair A70, really not a bad air cooler. Left it on 4.4 and it was wonderful. Depending on ambient temperature/case a 4.2GHz max stable doesn’t seem all that far off.

      • drfish
      • 3 years ago

      I’ve run my 2600K @ 4.6 GHz but it wasn’t happy about it in every game. In fact, it’s never been 100% Prime-stable at more than 4.2 during its entire life. It was always pretty good about temps living under my TRUE tower. It’s lived in two mobos with the same results too. I think it’s just kind of a dud in the overclock department.

      • w76
      • 3 years ago

      I ran my 2600K at 4.8GHz for a while, but it required some voltage, so I dropped to 4.2GHz like Fish and the least amount of voltage I could get away with, on the off chance that the hardware had to survive a while before something better came out. I’d have never guessed I’d have to wait this long before new options started looking appealing, so I’m extremely happy I didn’t opt to roast it with excessive voltage.

      In fact, its longevity means I might not even attempt to OC anything in the future, since the pace of improvement has so dramatically slowed.

      It almost makes the Star Wars “old high-tech” universe seem feasible, where everything is high-tech but has been around almost unchanged for decades.

      • travbrad
      • 3 years ago

      “Though you gotta lot of balls buying now with Ryzen so close. Not that I'm the first, or second, or third person to say that.”

      Based on all the leaks, Ryzen is looking like a good overall performer in a mix of highly threaded and more lightly threaded workloads, and it generally delivers more cores/threads at its price points than Intel. It’s also going to be a huge step up from Bulldozer/Piledriver in both IPC and efficiency. The one thing it’s not going to do, though, is become the IPC/clock speed king for lightly threaded workloads like ARMA 3/DayZ.

      You can debate whether it’s worth settling for fewer cores/threads for a bit more IPC, but only each person and their particular use cases can answer that question. I’m just glad there will actually be some tradeoffs to consider now instead of Intel having such an IPC advantage that extra AMD “cores” basically didn’t even matter. I think someone buying Kaby Lake now will be pretty satisfied with their gaming performance even after Ryzen comes out, although they maybe could have saved a few bucks by waiting, depending how aggressive AMD is with its pricing and production.

        • ImSpartacus
        • 3 years ago

        So much this.

        If your favorite games use the Arma engine, then you need to go Intel. There’s just no question.

        Games get more consistent frame times from higher single threaded performance. And Arma is on the far edge of that “gaming” spectrum with respect to benefiting from absurd cpu configs.

        So when you see that top Ryzen parts have inferior ipc AND clock speed, you know that it’s just not going to work for someone that wants the best Arma performance.

        Ryzen has a niche and it’ll do fine in the market (even in less ambitious gaming), but Arma is so far outside of its niche that it isn’t even funny.

      • jihadjoe
      • 3 years ago

      I’ve always thought my i7-3820 @ 4.5 was still good even in ArmA. I guess it’s dat quad channel memory at work.

    • Ph.D
    • 3 years ago

    Nah, I plan to use my 2600K until the 10nm Intel CPUs come along. (Or maybe until AMD makes an unlikely comeback!)
    I do admit it suuuure is taking a long time. I originally expected to get a new CPU in 2016 but the performance increases are just not there to warrant it if you ask me.

    I just spent my money on a GTX 1070 instead, and that has allowed me to play most games at great quality/resolution/frame rate. It’s still snappy enough in regular use as well.

      • Flying Fox
      • 3 years ago

      I’m still rocking the i7-875K.

        • Ph.D
        • 3 years ago

        If it works, it works!

    • chuckula
    • 3 years ago

    I vote we turn this into a series:

    First, a story by the guy with the Bulldozer who gets Colton’s old Sandy Bridge system.

    Then a story by the same guy when he gets RyZened.

    • demani
    • 3 years ago

    “My computer is behind my monitor on a corner desk, so I'll never see inside it anyway despite the fact that the Corsair Obsidian 450D case I chose does have a window.”

    Weeeellll. You can call it bias lighting, and then you will be one of the cool kids.

    Seriously though, I’ve found a light behind my monitor is actually helpful at reducing eyestrain, particularly in a semi-dark room. Give it a shot: pick a mellow color (a little more yellow than white) and see how it treats you.

    • CB5000
    • 3 years ago

    I did the same. Upgraded from a Core i5-2500K overclocked to 4.5 GHz to a Core i7-7700K. Still not much noticeable difference in most games since they were all running pretty fast with a beefy GPU, but things like web browsing are noticeably faster.

    Also the BIG difference came with Virtual machine handling.

    • flip-mode
    • 3 years ago

    It has been a LOOOOOONG time since anyone regretted pulling the trigger on an Intel build even though an AMD launch was imminent. If I had to bet on which way this will turn out, I’d have to bet that the long streak will not be broken.

    • cygnus1
    • 3 years ago

    I was staring this Kaby upgrade in the eye toward the end of last year and chose to skip it. I was on an i5-4690S, which was still very decent, but I was itching for an over-4GHz i7 type of upgrade. I looked at Kaby, but honestly it’s a fairly expensive pill to swallow coming from Haswell, for much less of a performance improvement than someone like you coming from Sandy would see. I ended up picking up a used i7-4790K off eBay for under $300, with the low-power S i5 intended for a new firewall I want to build for the new 1Gb internet I got not long ago. Under $300 for that upgrade, since I was able to reuse the MB and RAM, was a lot easier to justify than the nearly $750+ I would’ve had to spend to go Kaby i7 (CPU, MB, and RAM).

    Very nice build though. Definitely a good bump over Sandy.

    • Voldenuit
    • 3 years ago

    Fish, you idiot, Ryzen is almost here! You should have waited.

    EDIT: Beaten to it.

      • psuedonymous
      • 3 years ago

      Given his workloads were very much geared towards single-threaded performance, it seems very unlikely waiting for Ryzen will make a jot of difference. Everything we’ve seen from AMD officially (which sadly is very, very little) and from ‘leaked’ benchmarks, has shown Ryzen as doing gangbusters on multi-thread workloads, but on single-thread workloads coming in at about ‘Haswell-Broadwell-ish’ clock-for-clock. With the 7700k and the price-comparable R7 1700X, the 7700k has an edge of 700MHz even ignoring IPC gains entirely. While Ryzen could potentially* overclock well, so can Kaby Lake (with 4.8GHz being reasonable unmodified, and north of 5GHz not impossible).

      * AMD have perennially released CPUs that have plenty of cores, but not much grunt-per-core, to the point of it being their reputation. I’d expect that they’d want to shift that image, so if Ryzen has lots of headroom to overclock, I can think of no reason whatsoever for AMD to leave that performance on the table and not release Ryzen with clocks as absolutely as high as they can get them for available yields.

    • evilpaul
    • 3 years ago

    I recently upgraded from an i5 4690K to an i7 7700K and was able to hit the magic 5.0Ghz using an H115i (I already had the cooler). I had an ASUS Z97-AR and went with the PRIME Z270-A as well and haven’t had any problems with it.

    Well, the last PCI-e slot has the USB3 front panel header a bit too close to it.

    • tipoo
    • 3 years ago

    I don’t expect Ryzen will provide any regret, if the goal was a large single threaded performance gain over Sandy Bridge. Right now I think most thought is Ryzen = IVB IPC, with more cores for less money than Intel. Not with Kaby lake per-core performance/IPC.

    • NTMBK
    • 3 years ago

    Pfft, that’s nothing, I upgraded from a Phenom II with DDR2 to a 6700. Made a hell of a difference to Total Warhammer.

      • Anton Kochubey
      • 3 years ago

      I couldn’t wait that long, and migrated from Phenom II with 8 GB RAM to a 4790K with 32 GB. Added an SSD, too. Using the PC afterwards literally felt like I strapped a rocket booster to it.

        • ozzuneoj
        • 3 years ago

        LITERALLY?

          • Dazrin
          • 3 years ago

          Well, you should hear the graphics card he put in it….

    • hansmuff
    • 3 years ago

    Having done the exact same upgrade just a few weeks ago, I’d like to lend my perspective:

    Platform upgrades I appreciate:
    I can now add a shiny M.2 NVMe SSD without any platform issues. Nice. I realize there are PCI-E cards for that, but DMI is slower on the P67A. So full speed 960 PRO SSD here I come.

    Onboard sound keeps improving and the current iteration of the S1220 audio on the ASUS Maximus IX HERO has allowed me to ditch the add-on card. Finally.

    PCI-Express 3.0. Not that 2.0 was all that bad, but hey, faster! The new 1070 appreciates it.

    On performance:
    There are so many impressions. I’ll start with the web browser. It’s amazing how much faster the crappy reddit pages load now. I hate back-navigation on them, RES has to catch up and scroll the page around etc, it’s slow and clunky. The new machine has made this a LOT faster. Kraken bench also improved almost 100% so I’m sure it has to do with IPC, Speed Step 2 and other arch improvements.

    Games: Hard to say here because I also switched from a 970 to a 1070 and that’s not a HUGE jump but not small either. I will comment though that loading things is quite a bit faster. DOOM now loads in less than 1/2 the time it seems, the loading part was a little sluggish on the 2600k. Fallout 4 loves the new machine but load times seem about on par. Overwatch was very fast on the 2600k/970 already. Overall some nice improvements and I haven’t really played all that much. Star Citizen is still slow as balls, but I didn’t expect too much there anyway.

    Applications: Visual Studio are you kidding me with this ludicrous speed? From installation to loading to usage, it’s a lot more snappy. VS 2015 SP3 on the 2600k with extensions took about 6-8 seconds to come up. Now about 2. GUI is faster and C compilation rips through a large project about 50% faster. I haven’t done much with other applications. But installing stuff goes faster as well, pretty cool.

    Lastly, I sold all my old parts and made pretty good bank. After sale, my 2600k/16GB/MSI board TCO was $300 over the 6 years it ran. Not bad at all. It helped the sting of the upgrade.

    The “BUT RYZEN” crowd is not wrong. I’m partial to Ryzen as well. But hear me out on this: The Z270 chipset is a Z170 with a few changes, nothing major. It’s stable as fuck. Kaby Lake is a Skylake with a few changes, nothing major. It’s stable as fuck. RYZEN could be stable as fuck and I hope it is, but it’s very new on many fronts and carries more risk. I’m not spitting at AMD or anything, Intel has had their issues which I experienced first hand with the P67. This time around, it was good enough for me to know that I get absolutely top end single thread performance on a super stable platform. If you want to have an interesting insight into this aspect, read the “Specifications Update” documents Intel puts out for the 6700K and the 7700K, and compare the number of bugs(“Errata”). The 7700K carries over a number of them but even with a recent doc update, has far, far fewer defects. I like that.

    EDIT: Specification Update links!
    http://www.intel.com/content/www/us/en/processors/core/desktop-6th-gen-core-family-spec-update.html
    http://www.intel.com/content/www/us/en/processors/core/7th-gen-core-family-spec-update.html

    Those docs aren’t the easiest read. It’s also hard to compare them directly because some of the carryover bugs have new numbers in the 7th gen doc. Also a lot of issues have fixes, some BIOS, some microcode. Some microcode changes can affect performance. It’s a pretty fascinating window into efforts that go on past a product launch.

      • NoOne ButMe
      • 3 years ago

      I am always surprised that overwatch actually runs well on some systems…. breaking the “runs on everything, runs on nothing well” that Blizzard managed to keep around for over a decade!

      I need to play that though. Been sitting installed for 2 months now.

        • hansmuff
        • 3 years ago

        Hmm, I’ve been a Starcraft 2 player and it’s had performance issues early on, but they did fix them. Diablo 3, same deal. I have found Overwatch to be less prone to issues, probably because the number of players and effects on screen is somewhat a known quantity. D3 and SC2 have much larger ‘windows’ of performance to work through.

        Anyway, I love Overwatch. You can enjoy the game for 30 minutes a week or 30 hours a week. Recent additions to game modes/arcade made it a lot more interesting too.

          • NoOne ButMe
          • 3 years ago

          Not most demanding campaign map, i5-3570k stock = <40fps average
          http://media.bestofmicro.com/F/F/381579/original/StarCraft-2-Ultra-FPS.png

          A 7700K probably hits 60fps average there. Compared to Nehalem parts at the launch of SC2, that's 40% more IPC and 50% higher boost clocks.

          1v1 has been very good for a while at max, I agree.

            • Airmantharp
            • 3 years ago

            My 6700k is at stock, and at 1440p, my two GTX970’s are putting me near the 165Hz max refresh of the monitor.

            You need faster GPU(s).

            • NoOne ButMe
            • 3 years ago

            DOING WHAT?
            1v1, only rare drops <60 in maxed team fights for me.
            Coop, near 100fps at start, mid-end game 40-50, 25-35 with stukov.
            Campaign, 10-150fps depending on map and how far into map.

            It is 1-2 thread CPU bound game, not GPU. I’ve downclocked GPU base before turbo 130mhz, measure real clockspeed reduction, and near zero performance change.

            Tons did use AMD 7970, so Nvidia probably does add FPS. So for that part of level and game, should be 70-80 average on that CPU.

            • Airmantharp
            • 3 years ago

            I’ll look again when I get a chance to be sure, but I don’t recall framerates ever getting too bad in that game.

        • MrDweezil
        • 3 years ago

        Where did “runs on everything, runs on nothing well” come from? I’ve put a decent amount of time into most Blizzard titles (aside from Hearthstone) and have badgered my friends into doing the same so I have people to play with and I would say their stuff “runs well” across the board.

          • derFunkenstein
          • 3 years ago

          “runs on everything, runs well on nothing” is usually something people say about Java apps - at least, those written for browser plugins and mobile platforms.

          Blizzard’s stuff has always run well for me as long as I wasn’t using an IGP.

            • _ppi
            • 3 years ago

            The last Blizzard game that had this “runs on everything, runs well on nothing” problem was Diablo II. That game looked dated when it shipped and it did not have the performance to compensate (unless you had Voodoo graphics, 2D was faster than Direct3D until one guy wrote a Glide emulator for it).

            Their later titles did not share this disease. Sure, the graphics are not top notch, but a stable 60fps on a 60Hz panel with all settings maxed out is not an issue.

            • derFunkenstein
            • 3 years ago

            Diablo II is weird, and not totally 3D. At least in single-player mode, the frame rate was capped at 25fps.

      • douglar
      • 3 years ago

      While there’s no question that the CPU/platform/MHz upgrades make things faster, I am always left wondering what portion of the subjective “user interface feel” improvements comes from installing a fresh copy of Windows on a fresh SSD.

      Any thoughts how to make an objective test for this?

        • ImSpartacus
        • 3 years ago

        Wipe the old machine and then do a fresh install on it for your testing? That oughta do it.

        I didn’t read the methodology, but I hope that was what the author did.

          • drfish
          • 3 years ago

          I didn’t necessarily follow the normal, clinical, TR approach. As a brand new build though, it was a clean install.

            • ImSpartacus
            • 3 years ago

            So your new “after” build was a clean install, but the “before” build was an existing “normal use” installation?

            That’s a little disappointing to hear, but I appreciate your candor and speedy communication.

            • drfish
            • 3 years ago

            That’s correct. FWIW, I run a tight ship; even my normal-use install is pretty “fresh,” IMO. It’s a very fair question, though; I hadn’t even thought about it.

      • albundy
      • 3 years ago

      Seems like the crab audio 1220 codec is a magnet for expensive high-end motherboards. Would I pay an extra $100-$150 for it? Nope. I’ll just keep using my Sound Blaster X-Fi.

    • ColeLT1
    • 3 years ago

    Sweet build! Even going from Haswell to Kaby Lake, I noticed an almost doubling of my minimum frame rates.

    I upgraded to a similar setup, but opted for lower timing memory:
    [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820232194[/url<]

      • chuckula
      • 3 years ago

      You did a bunch of delids too, didn’t you?
      How have they worked out?

        • ColeLT1
        • 3 years ago

        [url<]https://techreport.com/forums/viewtopic.php?f=33&t=119137[/url<]

        -Worst chip struggled at 5.0GHz; I shipped it at 4.8GHz on air (but at a lower voltage, 1.3V).
        -Middle chip struggled at 5.2GHz and went to a friend down the road. He is at 5.0GHz, had to bump the voltage once from 1.325V to 1.335V, and has been without issue since.
        -Best chip would boot at 5.3GHz, but I could not get it stress-test stable at 5.25GHz. I got it stable at 5.2GHz, then backed it off to 5.1GHz @ 1.370V for now. I noticed a couple of goofy things: in BL2 some enemies spawned underground, and GTA V was not stable at all. I backed my uncore/cache from 5.1GHz to 4.9GHz and that seemed to stop the GTA V lockups.

        I delidded the chips before I even booted them, so no before/after temps, but I do have a delid-on-air vs. delid-on-water comparison:

        Air, open test bench: adaptive voltage 1.350V, x264 stress test at 5.1GHz = 89-90-91-84 °C, 1hr stable, 5.08 fps
        In case, watercooled: adaptive voltage 1.350V, x264 stress test at 5.1GHz = 75-70-75-68 °C, 2hr stable, 5.04 fps
        That's a 14-20-16-16 °C drop.

          • ImSpartacus
          • 3 years ago

          Delidded before you even booted?

          Jesus, this guy [i<]builds[/i<].

    • DPete27
    • 3 years ago

    I see this article as a double-edged sword. You’re highlighting stark performance differences that can be had by moving to a newer platform and/or faster memory, but the vessel is a 3-year-old game that’s clearly atypical in its favoritism of memory bandwidth and single-threaded performance compared to most games of that genre.

    Performance gains shouldn’t be swept under the rug just because they’re not the “norm,” and I think Colton did a good job of explaining that “my experience is clearly a niche case.” Although, according to Steam, Arma 3 is still being played by a relatively large number of people. Even if IPC isn’t increasing by leaps and bounds each generation, those advances do add up for games/programs that favor it: 5% per year over 6 years compounds to roughly a 34% increase in IPC.
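
    (A quick back-of-the-envelope check of that compounding, assuming a flat 5% gain per generation — the 5% figure is an illustrative assumption, not a measured number:

        # hypothetical compounding check, not a benchmark result
        gain_per_gen = 1.05      # assumed ~5% IPC gain per generation
        generations = 6
        print(f"{gain_per_gen ** generations - 1:.0%}")  # prints "34%"

    So roughly a third more work per clock, before any clock-speed gains.)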

    …not sure what my point is, but I spent a few minutes typing this, and it’s the internet, so you get my 2 cents whether it’s useful or not!

      • ColeLT1
      • 3 years ago

      Add Elder Scrolls Online, Borderlands 2 & PS (with PhysX on max), and GTA V as games that saw a noticeable improvement (a doubling of minimum FPS) just going from Haswell to Kaby Lake. I’d be willing to put money on any CPU-limited game (looking at you, MMOs) getting a boost from the platform and memory-speed change.

      I’m sad to say League of Legends did not improve haha, as I have never seen it stray from 144fps.

      I am going to be building a Ryzen setup for a friend for Plex and Blue Iris duty (H.264 transcoding). I want to throw my 1070 in it for a quick feeler, but I have a feeling I fall into the fewer-threads/more-speed camp for now with the games I play. The big question is what core clocks and DDR4 speeds Ryzen is going to reach.

        • DPete27
        • 3 years ago

        Seems like Crysis 3 (from TR i7-7700K review) scales well also. I actually like seeing these sorts of games used that show large performance differences in CPU reviews. Obviously you want to show one or two that have little/no benefit to keep readers in check also, but only 3 out of the 7 games TR benchmarked in the 7700K review showed any tangible benefit from CPU differences.

        I hear a lot about online/multiplayer games becoming CPU-dependent even when their single player modes are clearly not. For obvious reasons, those are hard to benchmark.

          • ColeLT1
          • 3 years ago

          Agreed. This is when the frame-time graphs really help prove that average fps is not the best metric for smoothness.

          Comparing a 4.4GHz 4790K vs. a 4.5GHz 7700K: a frame time of 8.3 ms corresponds to 120 FPS, and I game at 144 fps (if the engine supports it), so every millisecond counted here is a stutter for me.
          Time spent beyond 8.3 ms unless noted (lower is better); a sketch of how this metric is computed follows the numbers.

          -Doom OGL
          2204 -> 334

          -Doom Vulkan
          141 -> 127

          -Crysis 3
          6936 -> 1876

          -FC4 (using 16.7 here because 88fps max at these settings)
          204 -> 104

          -DE:MD (using 33.3 here because 56fps max at these settings)
          54 -> 0

          -GTAV
          16997 -> 6283
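
          For anyone unfamiliar with the metric, here's a minimal sketch of how a "time spent beyond X ms" figure can be computed from a frame-time capture. The function name and sample numbers are hypothetical illustrations, not TR's actual tooling:

              # Sketch: sum how far each frame overshoots the target frame time.
              # frame_times_ms is made-up data; a real capture would come from a frame-time logger.
              def time_beyond(frame_times_ms, threshold_ms=8.3):
                  return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

              frame_times_ms = [6.9, 7.1, 9.8, 25.0, 7.0, 12.4]
              print(round(time_beyond(frame_times_ms), 1))  # 22.3 ms spent past the 8.3 ms budget

          The 8.3 ms threshold is just 1000 / 120, the per-frame budget at 120 FPS; lower totals mean fewer and smaller hitches.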

            • Firestarter
            • 3 years ago

            I accidentally downthumbed you and can't undo it 🙁

            • ColeLT1
            • 3 years ago

            Thumbs up to you then! 🙂

            • DPete27
            • 3 years ago

            True. I usually don’t pay too much attention to the “time spent beyond” graph if the 99th percentile graph is relatively similar among all the processors.

            In that spirit though, look at the “time spent beyond” graph for Arma 3….Yikes!!!! Talk about a jittery mess.

          • jessterman21
          • 3 years ago

          Really it’s just the one level with all the grass… it will use as many CPU cores/threads as you can give it. DAT GRASS

        • MOSFET
        • 3 years ago

        BlueIris and Ryzen? BI is QuickSync only.

          • ColeLT1
          • 3 years ago

          I wouldn’t say it is QuickSync only, since I am not using QuickSync on my Blue Iris server, haha!

          But seriously, thank you for bringing this to my attention. I need to enable my 3570K’s integrated graphics and switch the encoder to QuickSync; I did not realize they had added it last year.

      • NoOne ButMe
      • 3 years ago

      It comes down to looking at what applications you use.

      If you play simulation games, RTS games, etc., which pull lots of resources from one or two CPU cores, then an upgrade is probably very good for you.

      If you play none of those, upgrading will still gain you something, but the benefits are much smaller.

      • travbrad
      • 3 years ago

      Yep, this article was clearly meant as more of a subjective “blog” post with a few quick benchmarks, not a full review, so I don’t have a problem with focusing on ARMA 3. Everyone who has seen some CPU/GPU reviews will know that most games are more dependent on the GPU, but it’s worth pointing out these less common “niche case” scenarios where your CPU can make a big difference. Not everyone is only playing the latest most optimized AAA games being released.

      I saw similar gains in ARMA 3 moving from my 2500K to a 6700K. It basically took the game from borderline unplayable (on busy servers) to being a smooth experience. There are a few other games that benefit from more Ghz/IPC/memory bandwidth too. I know Planetside 2 and Kerbal Space Program showed pretty big gains as well, especially minimum framerates.

    • Firestarter
    • 3 years ago

    this article just sent my wallet cowering in the corner, pleading for mercy

      • allreadydead
      • 3 years ago

      Just keep repeating
      “I’m one with the cannonlake and cannonlake is with me”

        • Firestarter
        • 3 years ago

        I got cannonlake and coffee lake mixed up and thought you recommended I wait out kaby lake just to settle for yet another re-spin of the same 14nm CPU

    • chuckula
    • 3 years ago

    EVERY TIME ONE OF YOU GUIZE BUYS AN INTEL PRODUCT A BUFFALO CRIES!

      • EricBorn
      • 3 years ago

      MOO!

    • Bomber
    • 3 years ago

    I upgraded my wife recently off her 2500K and noticed similar things. She is primarily a Diablo and WoW player, and since she wasn’t overclocked, even moving to Haswell (a free non-K 4770 prompted the upgrade) went a long way. She comments about it regularly. So a win either way.

      • morphine
      • 3 years ago

      You upgraded your wife recently? How’d you accomplish that?

        • Neutronbeam
        • 3 years ago

        If he told you, would you go out and get one? They have those in Portugal, right?

          • morphine
          • 3 years ago

          Well, if there’s solid info, I could give a shot at trading my girlfriend in. Preferably before 20:30 today so I can cancel the reservation.

            • Neutronbeam
            • 3 years ago

            Wow, you really ARE a romantic Bruno–who knew?

            • morphine
            • 3 years ago

            I’m all heart. With RGB LEDs.

            • Wirko
            • 3 years ago

            People in Southern Europe, girls included, seldom let themselves be overclocked much. Don’t hurry upgrading.

        • brucethemoose
        • 3 years ago

        New watercooling reservoirs?

        And RGB LEDs!

        • Bomber
        • 3 years ago

        Very very carefully of course!

        • CuttinHobo
        • 3 years ago

        In Soviet Russia, wife upgrades YOU!

        • ronch
        • 3 years ago

        How I wish upgrading one’s wife were as easy as upgrading a PC.

        • Peter.Parker
        • 3 years ago

        With money, of course.

        • Voldenuit
        • 3 years ago

        Was… was it socket-compatible?

          • tritonus
          • 3 years ago

          Wife 2.0: Socket compatible, drama grades doubled, not even overc(l)ocking needed!

      • JalaleenRumi
      • 3 years ago

      You upgraded your wife….? Damn. Didn’t know we could do that.

      Edit : oops. Too late.

      • puppetworx
      • 3 years ago

      I hear Stepford is nice this time of year.

    • orik
    • 3 years ago

    Fish, you idiot, Ryzen is almost here! You should have waited.

      • drfish
      • 3 years ago

      That could be. 🙂

      [i<]Edit: Don't down-vote his post, it's a joke.[/i<] 😛

        • DeadOfKnight
        • 3 years ago

        I just upgraded to a 5775c. Well, from an 875k. It was time.

          • deruberhanyok
          • 3 years ago

          Good upgrade! I have a feeling that games that take advantage of DX12 multi GPU will really like Iris Pro.

        • BurntMyBacon
        • 3 years ago

        [quote=”orik”<]Fish, you idiot, Ryzen is almost here![/quote<]
        [quote="drfish"<]That could be. :)[/quote<]

        Ryzen in-house CONFIRMED!!!

          • chuckula
          • 3 years ago

          +3 for use of CONFIRMED!!!

      • ImSpartacus
      • 3 years ago

      The sad thing is that he shouldn’t’ve waited for Ryzen with the expectation that he would get Ryzen.

      But he SHOULD have waited for Ryzen with the expectation that he would get Intel’s knee-jerk reaction to Ryzen (or at least reactionary sales).

      That is, the 7740K (and 7640K).

      [url<]http://wccftech.com/intel-core-i7-7740k-core-i5-7640k-amd-ryzen/[/url<]

      Higher clocks are never a bad thing, especially when you're starting with the "halo" part that already has pretty fierce clocks. Yeah, it's rumored to use LGA2066 and a larger TDP, but if he's the kind of person buying a $350 CPU that barely performs better than a $250 CPU, then he can justify the slightly pricier board and the extra RAM DIMMs (and hell, he might've already planned on using 4 DIMMs, lol).

      I don't normally recommend that someone wait for the "next best thing," but when you're preaching about how there's no substitute for superior IPC & clock speed, then you look a little silly for not getting the best.

        • tipoo
        • 3 years ago

        A 100MHz clock increase may not be worth waiting for either, tbh. That probably just comes out of its overclocking headroom, and he could achieve the same effect here by overclocking. Maybe a few tens of dollars shaved off would be about all there is to wait for, which, again, may not trump instant gratification 😉

          • ImSpartacus
          • 3 years ago

          A 7740K is absolutely not necessary for a satisfactory experience. This is undeniable.

          However, the author makes it clear that he’s aiming for minimal compromises and is wholeheartedly willing to pay for that.

          [quote<]However, the lesson I learned after chasing perfect DayZ performance for years is that sometimes we just need CPUs to get things done now. In those cases, [b<]there's no substitute for clock speed, IPC, and, apparently, even memory bandwidth and latency.[/b<][/quote<]

          The best is the best. The 7740K will be the best.

          And since Coffee Lake is rumored to bring 6-core -S class CPUs (with presumably low-ish clocks), there's a very real possibility that the clocks on Kaby Lake (and/or Kaby Lake-X) could allow it to match Coffee Lake's single-threaded performance (much like the hot-clocked 4790K generally traded blows with the lower-clocked 6700K despite Haswell's inferior IPC). Furthermore, if the 7740K is based on LGA2066, then it'll benefit from a quad-channel memory system. That's a healthy amount of bandwidth on hand. Even if a 7740K is simply binned better (and it surely is), that alone could be worthwhile.

          So yeah, is it really worth it? Probably not, but the author has made it clear that he's not looking for high value. He's looking for the best.

            • tipoo
            • 3 years ago

            In that case, the author will have 2.381% regret 😉

      • CScottG
      • 3 years ago

      Plagiarism at its finest! ..Up-vote!

      • RAGEPRO
      • 3 years ago

      [url<]https://www.youtube.com/watch?v=noJKWn9XkvM[/url<]

      • CScottG
      • 3 years ago

      As it turns out, nope. ..Fish is NOT out of water.

        • drfish
        • 3 years ago

        Broken clocks and all that… 😉
