Early Core i7-7700K testing reveals higher clocks and power usage

Intel's desktop Kaby Lake CPUs haven't even been officially announced yet, but the folks at Tom's Hardware got their hands on a sample of the blue team's biggest, baddest next-gen chip for the desktop so far—the Core i7-7700K—ahead of its launch. The site did what any of us would do in that enviable situation: test the chip's performance in both stock-clocked and overclocked configurations.

The basic specs of the Core i7-7700K that Tom's reports are roughly in line with the rumors and leaks we've seen so far. Tom's says the chip has a 4.2 GHz base clock and a 4.5 GHz Turbo speed, and its four cores and eight threads slip into a 95W TDP—up slightly from the Core i7-6700K's 91W figure. That TDP increase is borne out by a small increase in power consumption in the site's testing—141W under a stock-clocked Prime95 Small FFTs load for the Kaby Lake chip, up from 133W for the Core i7-6700K running the same torture test.

We won't spoil the site's detailed performance numbers, but on average, the stock-clocked i7-7700K pulled out a 3.6% performance improvement over the i7-6700K. Given its increased power draw, Tom's says that makes the Kaby Lake chip less efficient than its Skylake counterpart.
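
To put a rough number on that efficiency claim, here's a minimal back-of-the-envelope sketch using the figures above. It mixes a Prime95 torture-test power draw with an average application-performance delta, so treat it as illustrative rather than a proper performance-per-watt measurement:

```python
# Back-of-the-envelope efficiency comparison using the figures quoted above:
# Prime95 Small FFTs power draw and the ~3.6% average performance uplift
# Tom's Hardware reports for the stock Core i7-7700K over the i7-6700K.
power_6700k_w = 133.0   # W, stock Core i7-6700K under Prime95 Small FFTs
power_7700k_w = 141.0   # W, stock Core i7-7700K under the same load
perf_ratio = 1.036      # i7-7700K performance relative to the i7-6700K

power_ratio = power_7700k_w / power_6700k_w       # ~1.06x the power
perf_per_watt_ratio = perf_ratio / power_ratio    # ~0.98x the efficiency

print(f"Power ratio:         {power_ratio:.3f}x")
print(f"Perf-per-watt ratio: {perf_per_watt_ratio:.3f}x")
```

By this crude measure, the Kaby Lake chip does roughly 2% less work per watt than its predecessor, which points in the same direction as Tom's verdict.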

Of course, stock-clocked performance is one thing, but many folks eyeing the i7-7700K are doubtless wondering how Intel's improved 14-nm process technology translates into overclocking headroom. Every chip's overclocking performance will be different, of course, thanks to the silicon lottery. Where our Skylake chips generally have no issue hitting 4.6 GHz and sometimes 4.7 GHz, Tom's was able to take its i7-7700K to 4.78 GHz with 1.3V running through the core.

At those settings, the chip reached a dizzying 82° C above ambient (a chilly 15° C or 59° F, in these tests), compared to 60° C over ambient for the site's Core i7-6700K running at 4.6 GHz. Whatever a given i7-7700K's overclocking performance, it appears builders will still need to keep heavy-duty coolers on hand to realize maximum performance from their chips.
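
Because Tom's reports its thermals as deltas over ambient rather than as absolute readings (a detail that confuses at least one commenter below), the conversion is simply delta plus ambient. A quick sketch using the 15° C ambient from these tests:

```python
# Convert "temperature over ambient" deltas to absolute core temperatures,
# using the 15°C ambient reported for these tests.
ambient_c = 15  # °C

deltas_c = {
    "Core i7-7700K @ 4.78 GHz": 82,  # °C over ambient
    "Core i7-6700K @ 4.6 GHz": 60,   # °C over ambient
}

for chip, delta_c in deltas_c.items():
    print(f"{chip}: {ambient_c + delta_c} °C absolute ({delta_c} °C over ambient)")
```

That works out to roughly 97° C for the overclocked i7-7700K and 75° C for the i7-6700K in absolute terms.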

We still don't have any official word about Kaby Lake on the desktop, so it remains to be seen whether the performance, efficiency, and thermal behavior that Tom's observed with its i7-7700K sample are reflective of retail chips. Still, the numbers are intriguing. We'll hopefully learn more about Intel's next generation of desktop processors—and get our own idea of its performance—soon.

Comments closed
    • JosiahBradley
    • 3 years ago

    If you’ve been overclocking since Haswell, performance has actually dropped for the average chip. My 4790K still overclocks high enough to beat out any IPC gain I’d get from upgrading, and the power curve doesn’t look much better. So why spend another $350 for a CPU that might actually overclock worse, and thus perform worse, and pay another $200 for a motherboard? That’s right: if you want any new platform features you have to upgrade the chipset, and socket compatibility goes out the window. Not to mention I’m still on DDR3, running at higher clocks and lower latency than DDR4, so my memory subsystem is actually faster too.

      • sweatshopking
      • 3 years ago

      I’m running a 4790k too, but no oc

    • Dysthymia
    • 3 years ago

    This makes it seem to me like Intel has had overclocking headroom that they’ve been holding back on, but relying on to help performance increases for the past several generations, dribbling it out bit by bit, year after year. And now they’re just about out.

    • Krogoth
    • 3 years ago

    Not really that shocking at all. People are still in denial, thinking that semiconductors can somehow magically negate the laws of physics.

    • End User
    • 3 years ago

    I’ll wait to read TR’s review.

    • ozzuneoj
    • 3 years ago

    Am I the only person confused by Tom’s “Temperature Over Ambient” readings?

    Are they saying that at a 15°C/59°F ambient temp, their overclocked 6700K runs at 75°C under load and the 7700K runs at *97°C*???

    If so… what the crap? How is that even possible? I’m sure my ancient 2500K probably uses more power under load while overclocked, yet I don’t recall it ever breaking 65°C under stress testing with a Thermalright Ultra 120 Extreme and a basic 120 mm Antec Tricool (usually set to low).

    When discussing the changes in CPU/GPU power consumption and cooling requirements over the years, I’ve been told multiple times that it always comes down to TDP: you have to have enough cooling to dissipate the heat generated by the power consumed by the chip. So how can one chip with similar specs (Kaby vs. Skylake) and a tiny increase in stock TDP run over 20°C hotter in the same conditions? And what was so magical about Sandy Bridge that allowed it to get away with a ~25% overclock without even running hot?

      • Duct Tape Dude
      • 3 years ago

      Your 2500K is built on a larger process, which gives it lower thermal density. A similar TDP over a smaller core area means a hotter chip.

      The 6700K and 7700K are the same CPU; the latter is just pushing the silicon to near-AMD levels of heat when overclocked.
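
To put a rough number on the thermal-density point above: divide rated TDP by approximate die area. The ~123 mm² Kaby Lake figure comes up later in this thread, and the ~216 mm² Sandy Bridge die area and both 95W TDPs are Intel's published specs; rated TDP is only a stand-in for real load power, so this is an illustrative sketch, not a measurement:

```python
# Rough power density: rated TDP spread over approximate die area.
# Die areas: Sandy Bridge 4C ~216 mm^2; Kaby Lake 4C + GT2 ~123 mm^2.
# TDP stands in for actual load power here, so treat the output as a
# rough illustration of why the smaller die runs hotter at similar wattage.
chips = {
    "Core i5-2500K (32 nm)": (95.0, 216.0),  # (TDP in W, die area in mm^2)
    "Core i7-7700K (14 nm)": (95.0, 123.0),
}

for name, (tdp_w, area_mm2) in chips.items():
    print(f"{name}: {tdp_w / area_mm2:.2f} W/mm^2")
```

Roughly 75% more heat per square millimeter has to cross the die-to-IHS interface on the 14-nm chip, which is a big part of why it hits higher core temperatures with comparable coolers.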

        • RAGEPRO
        • 3 years ago

        You’re right. The 2500K also has a soldered IHS; the newer chips do not. That makes a big difference too.

          • Krogoth
          • 3 years ago

          Not as much as you would like to think.

          The 6600K and 7700K have more core logic on their silicon that consumes more energy when loaded. They also have a smaller surface area to transfer that thermal load. It isn’t that shocking that they become blast furnaces when you crank up the volts and megahertz.

            • RAGEPRO
            • 3 years ago

            You realize you just repeated Duct Tape Dude’s post, to which I replied “You’re right,” right?

        • ozzuneoj
        • 3 years ago

        I guess I always assumed that was one of the purposes of the IHS, to give more thermal area, but it makes sense that it could only do so much.

        As someone who has grown to hate hot-running computer components over the years, it really is looking like I’ll be sticking with my current system longer than I thought.

        If I ever find a super cheap 2600K/2700K second hand, I’d probably go that way, rather than consider going with a brand new space heater for $500 (6700K, motherboard and DDR4). Skylake looked okay, but it didn’t entice me that much, and certainly not enough to justify the cost of upgrading. If Kaby is worse in any area (thermals), then it’s automatically a no-go for me.

        If ever AMD had a chance to get back in the game, this is probably it. I know there’s no way they’ll beat Intel on IPC, but I can hope for competitive performance in real workloads, good thermals, overclocking headroom and the right price. However, if it is only marginally faster than simply getting a 6 year old 2600K and overclocking it… 😮

          • Duct Tape Dude
          • 3 years ago

          [quote<]As someone who has grown to hate hot-running computer components over the years, it really is looking like I'll be sticking with my current system longer than I thought[/quote<]

          FWIW, I've never seen a processor die from heat. Intel's chips are good for extended periods at 100°C, and I've had a GPU at 130°C several times and at ~109°C sustained while gaming (it's a long story) for years--still works fine.

          Now, memory on the other hand...

            • ozzuneoj
            • 3 years ago

            I’ve seen AMD chips die from heat, but no Intel chips personally.

            Killing the CPU isn’t really what I’m worried about with temps. High temps are bad for the rest of the system and you end up having to put extra work into cooling (either louder or more expensive solutions).

            Also, I don’t use air conditioning where I live and it can get fairly warm in the peak summer months, so it’s nice knowing my PC isn’t already operating on the brink of meltdown. I’m not sure if there’s technically any difference in overall heat output (it seems like heat transfer from the die to the cooler is the issue here), so I won’t comment on whether my Sandy Bridge dumps less heat into the house or not.

          • dodozoid
          • 3 years ago

          It doesn’t matter how hot the CPU runs; it only ever dissipates as much heat as it consumes in power. The difference is in the cooling solution, including the IHS. The cooling determines how big a temperature difference is needed to transfer that amount of energy.
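
A minimal sketch of that point, with made-up thermal resistances rather than measured values for these chips: the steady-state die temperature is ambient plus power times the total thermal resistance of the path, so the same heat output lands at a higher die temperature whenever any link in the chain (for instance, paste instead of solder under the IHS) gets worse.

```python
# Steady-state series thermal model: T_die = T_ambient + P * (sum of resistances).
# The resistance values below are illustrative placeholders, not measured figures.
def die_temp_c(power_w, ambient_c, r_die_to_ihs, r_ihs_to_cooler, r_cooler_to_air):
    """Steady-state die temperature (°C) given power (W) and resistances (°C/W)."""
    return ambient_c + power_w * (r_die_to_ihs + r_ihs_to_cooler + r_cooler_to_air)

ambient_c = 15.0   # °C, matching the test conditions in the article
power_w = 150.0    # W, an assumed overclocked package draw, for illustration only

# Same power, same cooler; only the die-to-IHS interface differs.
soldered = die_temp_c(power_w, ambient_c, 0.05, 0.10, 0.25)  # ~75 °C
pasted   = die_temp_c(power_w, ambient_c, 0.20, 0.10, 0.25)  # ~98 °C

print(f"Soldered IHS: {soldered:.0f} °C")
print(f"Pasted IHS:   {pasted:.0f} °C")
```

The total heat leaving the package is identical in both cases (150 W); only the temperature required to push it through the stack changes.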

          • smilingcrow
          • 3 years ago

          “If I ever find a super cheap 2600K/2700K second hand, I’d probably go that way, rather than consider going with a brand new space heater for $500 (6700K, motherboard and DDR4).”

          Don’t confuse a high temperature with a high heat output, as they are very different things.
          A space heater is a CPU that consumes a lot of power, not one that merely runs at a high temperature, and both of the chips you mention have roughly the same TDP.
          A 4.5W Core M can run at very high temps but will never put out much heat.

            • ozzuneoj
            • 3 years ago

            Sorry, I guess I meant “tiny insulated fireball” more than space heater… bad wording on my part. I figured that overall heat output would be similar (hence my later comment).

          • Krogoth
          • 3 years ago

          The only purpose of the IHS is to protect the silicon from the physical stresses of the heatsink/waterblock and to give it a consistent mounting surface. Former Socket A owners can attest to this.

            • ozzuneoj
            • 3 years ago

            Don’t forget Socket 370. At least AMD used those little pads in the corners to stop the heatsink from rolling during application. I cracked the corner off of a 1 GHz P3 last year while attaching the heatsink. It was the first time I’d ever damaged a CPU this way, and I was pretty sad. It certainly makes any CPU with a heat spreader more appealing.

      • bfar
      • 3 years ago

      Skylake runs a bit hotter when running AVX instructions, and Haswell runs way hotter in the same scenario. Sandy Bridge and Ivy Bridge don’t support those instructions at all, hence they appear to run cooler in the same test applications.

      Only more recent iterations of Prime95 use AVX.

    • Firestarter
    • 3 years ago

    Sounds like Kaby Lake would benefit from delidding just as much as Skylake does.

    • Bensam123
    • 3 years ago

    So curious why people think this is going to be crazy different from the mobile releases months ago? It’s basically the same chips just operating faster. We already saw the performance numbers from those.

      • K-L-Waster
      • 3 years ago

      Anyone who thought it was going to be crazy different is going to be (predictably) disappointed.

      Given that this is the “polish” phase of their new 3 phase development cadence, anyone who expected it was going to be exciting wasn’t paying attention.

        • Bensam123
        • 3 years ago

        Should’ve seen the responses I got in the topic months ago when talking about Kaby mobile as if it were normal Kaby Lake. Apparently this was supposed to be quite a bit different from mobile.

    • chuckula
    • 3 years ago

    At about 123 mm^2 (including the graphics), Kaby Lake is definitely not Intel’s biggest baddest chip.

      • RAGEPRO
      • 3 years ago

      I’m going to give you the benefit of the doubt and assume you were joking. 🙂

        • chuckula
        • 3 years ago

        You mean I was giving the article the benefit of the doubt and assumed it was joking when it called Kaby Lake Intel’s biggest baddest chip?

          • Jeff Kampman
          • 3 years ago

          “Next-gen chip for the desktop so far” is my way of leaving the door open.

      • Firestarter
      • 3 years ago

      I wish they’d swap the GPU out for a staggering amount of cache

      edit: well, yes, that’s Skylake-X/Kaby Lake-X. I just wish we didn’t need to sacrifice clock speed and a limb/organ for the privilege of ditching an iGPU [i<]that we don't use anyway[/i<]

      edit edit: and of course I’d just like MORE CACHE

        • MOSFET
        • 3 years ago

        Firestarter, with your ideology (and perhaps an orange face) I’d vote for you for president.

          • Firestarter
          • 3 years ago

          please, no, just don’t

        • VincentHanna
        • 3 years ago

        I use mine for PhysX. So there.

        Oh, plus video decode. So there.

          • Firestarter
          • 3 years ago

          You use PhysX on an Intel iGPU? Please do elaborate.

    • chµck
    • 3 years ago

    We don’t know if it’s an engineering sample or not.
    The true retail part may be slightly better.

    • Voldenuit
    • 3 years ago

    I already thought Skylakes were too hot when overclocked. Gonna give Kaby a pass. Is Intel really so bullish they’d repeat Coppermine or Northwood?

      • I.S.T.
      • 3 years ago

      Do you mean Prescott? Northwood was a good OCer.

        • Voldenuit
        • 3 years ago

        Yeah, Prescott was called ‘Presshot’ so probably more appropriate.

        But I tend to lump all the Netbursts into the “too damn hot” category, personally.

          • I.S.T.
          • 3 years ago

          Northwood was actually quite nice in power consumption, though. Sometimes better than the Athlon XP if both were OCed. It was definitely nowhere near as bad as Prescott.

          Edit: Just did some checking. The Northwood was better than many Athlon XPs in the low to middle end, but exceeded them when pushed to its absolute limits at stock. (It helps my research that the Northwood was released in 3.6 GHz form.)

          Not sure what the Athlon XP pushed to its absolute limits would draw. I’d have to do more than a quick Google search to find that out.

        • curtisb
        • 3 years ago

        Except for that pesky Sudden Northwood Death Syndrome.

          • I.S.T.
          • 3 years ago

          Taking a look, it seems many of those were OCed with stupidly high voltages. Overvolting often leads to processor frying/death of some sort. It’s much safer now, of course, but it’s still risky.

            • curtisb
            • 3 years ago

            It didn’t take too much above stock to kill them. I killed two that were barely overclocked and had very little voltage adjustment. Had several friends that were in the same situation. The stupidly high voltages only made it happen faster. It was a pretty big deal so most who had minimal voltage increases had enough warning so they could back theirs down before any damage occurred.

        • albundy
        • 3 years ago

        Hah, good times. I burnt mine out and drilled a hole in it to make a nice keychain, but it couldn’t even do that right and eventually broke in half.

      • curtisb
      • 3 years ago

      EDIT: Replied in wrong place.

      • adampk17
      • 3 years ago

      Coppermine was a PIII and, if I remember correctly, was considered a pretty good chip.

        • The Egg
        • 3 years ago

        Yeap. Both Coppermine and Northwood C were good chips. He’s thinking of Prescott and… I’m not sure what else. Maybe the original P5 Pentium with the FPU error?

        • I.S.T.
        • 3 years ago

        The first version had issues when pushed past 1 GHz (crashes, etc.) on officially released processors. They did a respin or something, and it was a lot better past 1 GHz. Much more stable, etc.

          • adampk17
          • 3 years ago

          Yes, I believe that was the 1.13 GHz version. Rushed out (my speculation) because AMD was extending their run of dominance with the T-Bird at the time.

          Intel released the Tualatin core PIII which clocked higher successfully. I think that chip is also the parent of what would become Conroe, if memory serves.

    • the
    • 3 years ago

    Kind of a lackluster upgrade. Kaby Lake appears to simply be Skylake v2. It isn’t terrible, but it’s disappointing for those expecting more of a performance jump.

      • Duct Tape Dude
      • 3 years ago

      I mean, wasn’t this expected? The CPU is unchanged but better binned, the GPU is slightly beefier, and IO is slightly more modern. It keeps investors happy and lets Intel draw out its transition to 10nm. Mobile parts benefit more from the improved binning than desktop parts.

        • the
        • 3 years ago

        I guess I was expecting higher clocks or a few other minor changes compared to the previous Skylake part to spur additional sales. For example, doubling the L2 cache to 512 KB per core like in the coming server parts. AVX-512 is likely too much to ask from Intel considering how they want to segment that, but it would have helped distinguish the design. A larger L3 or even the addition of an L4 cache would have been welcome too.

        The really disappointing areas are the GPU side and the chipset. No PCIe 4.0, which would have been a good entry point. No DisplayPort 1.3 or variable refresh support. While these technologies are on Intel’s roadmap, it appears that they will arrive with Cannon Lake. The big new feature of the chipset is four more PCIe lanes for additional M.2/U.2 support. 3D XPoint SSD support is a bullet point on some of Intel’s slides, but only for raw storage: those drives should work in any M.2 slot since they use an NVMe storage controller. The GPU gets a modest feature boost, with HDR support being added to the H.265 acceleration.

        The features that would really give a performance jump would increase die size, but if Intel is stuck on the same process for two iterations in a row, something has to give so that people have a reason to upgrade.

      • EndlessWaves
      • 3 years ago

      Good thing nobody was disappointed then.

    • ronch
    • 3 years ago

    Zen is the chip to get in 2017.

    /b

      • zdw
      • 3 years ago

      Even if it isn’t, hopefully it will start a price/performance/core-count rivalry that will improve Intel’s lineup.

      • nico1982
      • 3 years ago

      Truth is, Intel’s performance no longer being a moving target is good news for AMD (and consumers). If they manage to bring Zen within the Haswell performance ballpark, and do it in time, AMD might be able to claw back a sizeable chunk of market share from Intel.

      • Demetri
      • 3 years ago

      I’m hoping for an 8 thread Zen that is basically a 4770K for $200.

        • Voldenuit
        • 3 years ago

        You’re not far off the rumor mill:

        [url<]https://www.pcper.com/news/Processors/Rumor-Leaked-Zen-Prices-and-SKUs[/url<]

        [quote<][list<]
        Zen SR3: (65W, quad-core, eight threads, ~$150 USD)
        Zen SR5: (95W, hexa-core, twelve threads, ~$250 USD)
        Zen SR7: (95W, octo-core, sixteen threads, ~$350 USD)
        Special Zen SR7: (95W, octo-core, sixteen threads, ~$500 USD)
        [/list<][/quote<]

        • nico1982
        • 3 years ago

        The problem is that 4770K performance implies a 4770K price, eh.

      • K-L-Waster
      • 3 years ago

      That assumes there is a chip to get in 2017 at all.

      This CPU generation could end up being a Brewster’s Millions “None of the Above” affair…

      • the
      • 3 years ago

      Well this is good news for AMD as Intel isn’t moving forward in any significant way. The performance gap AMD needs to cross isn’t getting any bigger so it gives them an opportunity to catch up.

    • TheRazorsEdge
    • 3 years ago

    Sounds underwhelming.

    The new hardware acceleration may be appealing for mobile, but I’m no longer worried that I jumped on some Cyber Monday deals on Skylake parts.

      • Chrispy_
      • 3 years ago

      Yep. Skylake is/was worth buying because DDR4 finally superseded DDR3. Everything else can be retrofitted to an older chip via expansion cards, and the performance of older generations is still comparable in most tasks.

      Kaby seems to be of no value over Skylake for desktop builders, and if your DDR3 is fast enough, Skylake offers very little of value over Haswell too.

      Literally the only thing of note since Sandy has been the change from DDR3 to DDR4; apart from that, the main benefit across generations has been “improved IGP,” which is very useful for laptops but of no consequence whatsoever to system builders.

        • JustAnEngineer
        • 3 years ago

        Besides DDR4, mid-range Skylake motherboards (e.g.: [url=http://www.gigabyte.com/products/product-page.aspx?pid=5832#ov<]GA-170MX-Gaming 5[/url<]) also include USB 3.1 type C and M.2 PCIe+NVMe that weren't found on motherboards back when Haswell was top dog.
