Checking in on Intel’s Core i7-5775C for gaming in 2018

Way back in the sands of August 2015, we got a rare sample of the exotic Core i7-5775C in the TR labs. This Broadwell CPU was unique—and remains unique—among Intel desktop chips because of its 128-MB slice of on-package embedded DRAM, or eDRAM. That on-package memory served as a large L4 victim cache for the entire CPU.

Intel includes eDRAM on processors with its Iris Pro integrated graphics to save them the trouble of having to depend entirely on system RAM for some types of graphics-related memory accesses, but that turned out not to be eDRAM’s only benefit. Our testing at the time suggested that the i7-5775C’s eDRAM sometimes gave it an edge in 99th-percentile frame times over the Core i7-6700K, even as the Broadwell part was hampered by clock-speed deficits versus the Skylake chip and a last-generation platform limited to DDR3 RAM.

The chart that started it all

Since that time, the i7-5775C has developed a (possibly exaggerated) reputation among enthusiasts as a 99th-percentile frame-time monster unequaled by anything that has come before or since. Part of that may be our fault—we didn’t test the i7-5775C much in subsequent CPU reviews. Broadwell desktop parts proved both costly and rare as hens’ teeth at retail, and the eDRAM didn’t deliver any notable productivity boost outside of gaming to warrant the extra cost. We generally recommended that people who weren’t obsessed with the very best 99th-percentile frame times go with Skylake parts instead.

Perhaps in the absence of data, the i7-5775C’s stature has grown to the point that enthusiasts still inquire about how its 99th-percentile-frame-time prowess holds up over three years after we uncovered its intriguing performance characteristics. Since people still ask, we figured we ought to answer them with our latest arsenal of CPU-bound games and frame-time measurement tools.

Before we address that question, though, I would temper the expectations of our eager inquirers a bit. A lot of water has flowed under the bridge in three years’ time. AMD’s Ryzen CPUs have thoroughly reshaped what the core and thread count of the mainstream PC looks like, and Intel has responded by adding more cores and threads to its own CPUs. The blue team has pushed single-core clock speeds ever higher, too, from 4.2 GHz on the Skylake i7-6700K to 4.9 GHz on the recently launched Core i7-9700K or even 5 GHz on the Core i9-9900K.

The improvements haven’t stopped there. DDR4 memory has gotten faster, and overclocked sticks running at 3200 MT/s are now a de facto enthusiast standard versus the 2133 MT/s RAM we first tested with Skylake. Nvidia has released two new graphics architectures in that time, as well, and mere $250-ish graphics cards now deliver performance similar to that of the $550-ish GTX 980 we used back when we originally tested the Core i7-5775C. In the high-end graphics-card market, the GTX 1080, GTX 1080 Ti, and RTX 2080 Ti have all set new bars for what’s possible from a graphics card.

For gamers chasing the highest frame rates and lowest 99th-percentile frame times, those developments have raised the performance ceiling considerably, and CPU-bound games now need a lot more oomph behind them to reach it. That’s especially true of single-threaded performance, since that’s where a lot of games that care about processor performance end up bottlenecked.

Given those developments, the i7-5775C doesn’t look particularly well-positioned to get the most out of today’s most demanding graphics cards, eDRAM or not. It’s long seemed unlikely to me that whatever magic produced its above-its-weight-class 99th-percentile frame-time performance in our initial review has held up. Four-core, eight-thread CPUs with DDR3 memory already trail their Skylake and Kaby Lake counterparts when we set up CPU-bound, high-refresh-rate gaming experiences, and that eDRAM would have to do a lot of work to close the gap with newer chips entirely.

See, Broadwell was never a high-clocking, power-hungry desktop beast (at least for mainstream systems). The 65-W i7-5775C’s four cores and eight threads topped out at a peak speed of 3.7 GHz, and I’ve observed all-core speeds of 3.6 GHz from that chip under good cooling. Those figures were low when Broadwell and Skylake were new, and they’re low now.

Slide courtesy Anandtech

The 5775C’s eDRAM may have given its memory subsystem a boost when we were treading carefully with DDR4 speeds, but now that DDR4-3200 lets Intel’s memory controllers approach 50 GB/s of unidirectional bandwidth in directed testing, the 50 GB/s of bidirectional bandwidth (100 GB/s in aggregate) that Broadwell’s eDRAM cache offers might not be as impressive as it once was. Main-memory latency is less of a concern for Intel chips than it used to be, too. Intel claimed that accessing eDRAM would take 30 ns to 32 ns when it first paired the Crystal Well eDRAM die with Haswell CPUs. That’s still a fair bit quicker than the 43 ns to 45 ns we record with today’s swiftest Intel chips, but the gap is much smaller than it used to be.
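Those headline bandwidth numbers are easy enough to sanity-check: peak DRAM bandwidth is just transfer rate times bus width times channel count. Here’s a back-of-the-envelope sketch (theoretical ceilings under assumed dual-channel configurations, not measured throughput, and the helper function is ours):

```python
# Theoretical peak bandwidth for a DDR memory interface:
# transfer rate (MT/s) x 8 bytes per 64-bit channel x channel count.
# These are ceiling figures; directed tests approach but don't hit them.

def ddr_peak_gbs(mt_per_s: float, channels: int = 2) -> float:
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes) for 64-bit channels."""
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr_peak_gbs(3200))  # dual-channel DDR4-3200: 51.2 GB/s
print(ddr_peak_gbs(1866))  # dual-channel DDR3-1866: ~29.9 GB/s
```

That 51.2-GB/s ceiling squares with the roughly 50 GB/s we see from Intel’s dual-channel DDR4-3200 controllers in directed testing, while the DDR3-1866 on our Z97 testbed tops out well below that mark.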

For an idea of the shape of the CPU market today, even our gaming-value favorite, the $200-ish Core i5-8400, has an all-core boost speed of 3.8 GHz. Its Skylake cores are fabricated on a twice-improved version of Intel’s 14-nm process. It benefits from DDR4-2666 at a minimum, and it can be paired with even faster RAM if you opt to run it on a Z370 motherboard. We’ve already spoken of Coffee Lake and Coffee Lake Refresh Core i7s, chips whose price tags are similar to the i7-5775C’s $366-to-$377 suggested range yet which boast peak single-core clock speeds of 4.7 GHz to 4.9 GHz. That’s nothing to sniff at if you’re after the best high-refresh-rate gaming experiences, even if the i7-5775C can be overclocked to make up some of the difference.

A sampling of Core i7-5775Cs on eBay as of this writing

Even if the i7-5775C did deliver exceptional performance and you did want to take the plunge on one of these chips to underpin your 2018 gaming system, it’s not as though these chips are abundant or inexpensive. Three i7-5775Cs are on eBay as I write this, and buyers want anywhere from $329.95 to $350 for their wunderchips. New-in-box models of the Z97 motherboards you’d want to run this chip are no longer available at e-tail, although used boards remain reasonably priced on eBay—a pleasant surprise, if you’ve gone shopping for vintage Intel motherboards of late. Enthusiast DDR3 RAM isn’t much cheaper than run-of-the-mill DDR4-3200 modules, though.

Prices for Intel’s latest enthusiast chips have risen recently thanks to supply issues, to be sure, but you just wouldn’t be saving a ton of money on the road to hypothetical 99th-percentile bliss by settling onto a 2014-era platform. You’d also miss out on features we take for granted these days, like reliable NVMe boot (which has never worked consistently on our Z97 testbed), PCIe 3.0 lanes from the chipset, USB 3.1 Gen 2 ports, and much more. The fact is that a four-year-old PC is still a four-year-old PC, and we couldn’t in good conscience recommend that users start with such an aged platform unless the i7-5775C were still delivering indisputable miracles for gaming performance.

I’ve just finished reviewing and testing a wide range of Intel processors for our Core i9-9900K review, and our frame-time digestion tools are warmed up and hungry for more. I figure now is as good a time as any to see how much of the i7-5775C’s magic remains. I’ve dusted off our trusty Z97 motherboard and DDR3 RAM to see what happens when you ask a modestly clocked quad-core with eight threads and a massive cache to drive today’s single most powerful video card. Let’s find out.

 

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

Processor Intel Core i7-8700K Intel Core i7-9700K Intel Core i9-9900K
CPU cooler Corsair H100i Pro 240-mm closed-loop liquid cooler
Motherboard Gigabyte Z390 Aorus Master
Chipset Intel Z390
Memory size 16 GB
Memory type G.Skill Flare X 16 GB (2x 8 GB) DDR4 SDRAM
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 2T
System drive Samsung 960 Pro 512 GB NVMe SSD
Processor AMD Ryzen 7 2700X AMD Ryzen 5 2600X
CPU cooler EK Predator 240-mm closed-loop liquid cooler
Motherboard Gigabyte X470 Aorus Gaming 7 Wifi
Chipset AMD X470
Memory size 16 GB
Memory type G.Skill Flare X 16 GB (2x 8 GB) DDR4 SDRAM
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 2T
System drive Samsung 960 EVO 500 GB NVMe SSD
Processor AMD Ryzen Threadripper 2950X AMD Ryzen Threadripper 1920X
CPU cooler Enermax Liqtech TR4 240-mm closed-loop liquid cooler
Motherboard Gigabyte X399 Aorus Xtreme
Chipset AMD X399
Memory size 32 GB
Memory type G.Skill Flare X 32 GB (4x 8 GB) DDR4 SDRAM
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 1T
System drive Samsung 970 EVO 500 GB NVMe SSD
Processor Core i7-5775C
CPU cooler Corsair H100i Pro 240-mm closed-loop liquid cooler
Motherboard Asus Z97-A/USB 3.1
Chipset Intel Z97
Memory size 16 GB
Memory type Corsair Vengeance Pro 16 GB (2x 8 GB) DDR3 SDRAM
Memory speed 1866 MT/s (actual)
Memory timings 9-10-9-27
System drive Samsung 850 Pro 512 GB SATA SSD

Processor Core i9-7900X
CPU cooler Corsair H100i Pro 240-mm closed-loop liquid cooler
Motherboard Gigabyte X299 Designare EX
Chipset Intel X299
Memory size 32 GB
Memory type G.Skill Flare X 32 GB (4x 8 GB) DDR4 SDRAM
Memory speed 3200 MT/s (actual)
Memory timings 14-14-14-34 1T
System drive Intel 750 Series 400 GB NVMe SSD

Our test systems shared the following components:

Graphics card Nvidia GeForce RTX 2080 Ti Founders Edition
Graphics driver GeForce 411.63
Power supply Thermaltake Grand Gold 1200 W (AMD)

Seasonic Prime Platinum 1000 W (Intel)

Some other notes on our testing methods:

  • All test systems were updated with the latest firmware, graphics drivers, and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.
  • Our test systems were all configured using the Windows Balanced power plan, including AMD systems that previously would have used the Ryzen Balanced plan. AMD’s suggested configuration for its CPUs no longer includes the Ryzen Balanced power plan as of Windows’ Fall Creators Update, also known as “RS3” or Redstone 3.
  • Unless otherwise noted, all productivity tests were conducted with a display resolution of 2560×1440 at 60 Hz. Gaming tests were conducted at 1920×1080 and 144 Hz.

Our testing methods are generally publicly available and reproducible. If you have any questions regarding our testing methods, feel free to leave a comment on this article or join us in the forums to discuss them.

 

Crysis 3

Even as it approaches six years of age, Crysis 3 remains one of the most punishing games one can run. With an appetite for CPU performance and graphics power alike, this title remains a great way to put the performance of any gaming system in perspective.


The i7-5775C doesn’t work any miracles for Crysis 3 performance in our broad overview of its work in that title. It turns in the lowest average frame rate and highest 99th-percentile frame time of any chip here.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends working on frames beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. 11.1 ms correlates to 90 FPS.

To best demonstrate the performance of these systems with a powerful graphics card like the RTX 2080 Ti, it’s useful to look at our strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
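For the curious, the accounting behind these graphs can be sketched in a few lines. This is an illustration with made-up frame times rather than our actual capture tooling, and it assumes the metric sums only the portion of each frame time past the threshold:

```python
# "Time spent beyond X": for each frame that took longer than the
# threshold, accumulate the excess time past that threshold.
# Thresholds map to frame rates via 1000 ms / threshold (16.7 ms ~ 60 FPS).

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds accumulated past the threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

def percentile_99(frame_times_ms):
    """99th-percentile frame time: 99% of frames finished this fast or faster."""
    ordered = sorted(frame_times_ms)
    return ordered[round(0.99 * (len(ordered) - 1))]

# Hypothetical one-minute run: mostly ~7-ms frames plus a few spikes.
frames = [7.0] * 8560 + [12.0, 18.0, 25.0, 55.0]

print(percentile_99(frames))            # 7.0 ms for this smooth run
print(time_spent_beyond(frames, 16.7))  # ~47.9 ms accumulated past 16.7 ms
```

Note how a run like this would register only a blip on the 16.7-ms chart despite its one 55-ms spike, which is why we stress that only differences amounting to significant fractions of a second are worth worrying about.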

By these metrics, the i7-5775C doesn’t get off to a great start. Crysis 3 tends to love every core and thread one can throw at it, but even then, the lowly i5-8400 slices about three seconds off the Broadwell chip’s time spent beyond 8.3 ms. The difference is even more stark when we click over to the 6.94-ms graph, where the i7-5775C trails every one of our modern CPUs by a wide margin.

Even if we look at the 16.7-ms mark, where the i7-5775C’s eDRAM is reputed to work miracles on real outlier frames, the wunderchip is at the back of the pack. It’s worth noting that we’re only talking about a couple of frames’ worth of roughness at this threshold, too—nothing the average gamer is likely to spot. Keep that in mind as we proceed through this article: the real differences in performance are the ones where we’re talking about significant fractions of a second worth of time or more in these buckets, not vanishing outliers.

 

Assassin’s Creed Odyssey

Ubisoft’s most recent Assassin’s Creed games have developed reputations as CPU hogs, so we grabbed Odyssey and put it to the test on our systems using a 1920×1080 resolution and the Ultra High preset.


Assassin’s Creed Odyssey seems to need lots of cores and threads to run well, but that fact doesn’t seem to hurt the i7-5775C as much as it did in Crysis 3. The Broadwell chip ties the i5-8400 for the lowest average frame rate, but its 99th-percentile frame time can’t quite match the entry-level Coffee Lake six-core. The i7-5775C does beat out our Socket AM4 Ryzen CPUs for 99th-percentile frame times, but still, this is hardly a performance miracle.


Starting our analysis at the 16.7-ms mark, the i7-5775C doesn’t hold up to its reputation as a uniquely capable smoother of roughness even compared to the Core i5-8400. Flip over to the 11.1-ms mark, and the 5775C spends more time past it than even our Socket AM4 Ryzen CPUs. The worst of the Ryzen chips’ frames might be rougher than the i7-5775C’s, but they’re still better at delivering sustained performance.

 

Deus Ex: Mankind Divided

Thanks to its richly detailed environments and copious graphics settings, Deus Ex: Mankind Divided can punish graphics cards at high resolutions and make CPUs sweat at high refresh rates.


Deus Ex: Mankind Divided might be the first title in which we see the i7-5775C’s reputation even somewhat reflected in our data. The 5775C can’t match the i5-8400’s average frame rate, but the two chips are neck-and-neck in our 99th-percentile measure of delivered smoothness.


If we dig into our time-spent-beyond-X data at the 16.7-ms mark, the i7-5775C does actually land near the top of our rankings. Even in the worst case, we’re talking about a handful of frames’ worth of roughness in a sequence of many thousands. The 5775C’s advantage begins to fade as we look at the time spent past the 11.1-ms mark, though, and it’s gone completely if we use 8.3 ms as our threshold of interest. Modern Intel enthusiast chips spend less than half as much time past that mark as the i7-5775C, and our ninth-gen Core parts spend far less still. Again, that’s nothing worth going out and buying this aging chip over.

 

Grand Theft Auto V

Grand Theft Auto V‘s lavish simulation of Los Santos and surrounding locales can really put the hurt on a CPU, and we’re putting that characteristic to good use here.


Grand Theft Auto V seems to care about single-threaded performance above all, so perhaps it’s no surprise that the relatively pokey i7-5775C can’t even catch AMD’s high-end desktop CPUs in this title.


The frame-time percentile graphs of these chips are all low and flat, and it’s not unpleasant to game on any of them. That said, the i7-5775C trails every chip here by a wide margin no matter what threshold you choose to look at.

 

Hitman

After an extended absence from our test suite thanks to a frame rate cap, Hitman is back. This game tends to max out a couple of threads but not every core on a chip, so it’s a good test of the intermediate parts of each processor’s frequency-scaling curve. We cranked the game’s graphics settings at 1920×1080 and got to testing.


Like we saw with Deus Ex, the i7-5775C can’t quite catch the i5-8400 in average frame rates when running Hitman. Its 99th-percentile frame time is still ever-so-slightly better than that of the entry-level Coffee Lake chip, though. I’m not convinced that’s a miracle worth gushing over, but it’s something.


Our time-spent-beyond-X graphs show why I’m not exactly over the moon for the Broadwell chip in this title. The i7-5775C avoids the bizarre and noticeable roughness our ninth-gen Core chips exhibit at the 16.7-ms and 11.1-ms thresholds, but it trails all but the Ryzen 5 2600X when we begin looking at the more demanding 8.3-ms and 6.94-ms thresholds. The 5775C is performing fine, without a doubt, but it’s not pulling off any performance coups here.

 

Far Cry 5


Far Cry 5 is another game that seems to care more about memory latency and single-threaded performance than anything, and the i7-5775C still can’t seem to work any miracles compared to its modern mainstream-desktop brethren. The Core i5-8400 delivers both higher peak frame rates and a lower 99th-percentile frame time compared to the Broadwell chip, and even the Ryzen 7 2700X pulls ahead in both measures to some degree.


Our time-spent-beyond-X graphs provide some interesting counterpoints to our high-level overview of performance, though. Far Cry 5 sometimes experiences the occasional big, noticeable spike in frame times, and both the i7-9700K and i5-8400 seem to suffer from those spikes worse than most (although it’s curious that the i7-8700K and i9-9900K don’t). The i7-5775C seems to do a somewhat better job than most chips at keeping those hitches in check, but once we cross over to the 11.1-ms mark and below, the Broadwell chip starts falling as far behind as its lower average frame rate would suggest. Is the occasional zapping of a big frame-time spike of questionable replicability worth wedding yourself to an aging CPU and platform, given what we’ve seen so far from the i7-5775C? Probably not.

 

Conclusions


The Core i7-5775C holds up well enough in today’s most CPU-intensive games with today’s most powerful graphics card, but it’s hardly the miracle worker its reputation might suggest. This chip’s eDRAM may have allowed it to endure the passage of time better than the average CPU of its era, but it’s not slaying even the most affordable Coffee Lake six-core available now. The Broadwell part is still duking it out with AMD’s latest and greatest Socket AM4 chips, for what it’s worth, although the Ryzen 5 2600X and Ryzen 7 2700X still come out ahead in our final reckoning thanks to their multi-threaded prowess in a couple of titles.

In any case, there is no reason to spend $350 on a used i7-5775C plus whatever you’ll shell out for the aging DDR3 memory and Z97 motherboard you’ll need to run it. Any mainstream Intel CPU you can buy today for $200 or more will provide high-refresh-rate gaming experiences and 99th-percentile frame times superior to those of the Broadwell part, and AMD’s Socket AM4 Ryzens deliver similar gaming performance along with far more multithreaded grunt than the i7-5775C can muster.

Could Intel have made an even more compelling part for gamers over the past couple of years by manufacturing a socketed Skylake, Kaby Lake, or Coffee Lake part with embedded DRAM, especially with the promotion of the eDRAM from a victim cache to a more sophisticated member of the memory hierarchy in Skylake chips that have it? That’s impossible to say, since Broadwell remains the only socketed CPU with eDRAM that Intel has ever produced. Intel has much bigger and more fundamental fish to fry at the moment, so we may never find out how a more modern CPU and eDRAM play together. One can dream, though.

If you’re after the best 99th-percentile frame times in CPU-dependent gaming today, you can safely regard the i7-5775C as a collector’s item rather than an everlasting panacea for the frame-time spikes and stutters we despise. A Core i7-8700K or Core i7-9700K will run circles around the i7-5775C in both average and 99th-percentile FPS, dollar for dollar. Those chips have considerable advantages in single-threaded clock speeds and multithreaded bang-for-the-buck, too, and they run on modern platforms with modern memory and connectivity. Even though the pace of PC hardware innovations has slowed, the small improvements we’ve enjoyed from year to year have added up, and the i7-5775C’s unique virtues haven’t staved off erosion from the sands of time.

Comments closed
    • TrantaLocked
    • 7 months ago

    It would be nice to see an update with faster DDR3 and the i7-5775c overclocked to 4.2GHz. Anyone with a Z97 motherboard already has the ability to achieve a setup like this with their current motherboard, so showing what i7-5775c owners can actually do compared to the stock DDR4+8700K experience would be interesting and useful. And putting a 6700k in there for scale would be good.

    • crispysilicon
    • 10 months ago

    Well, I just registered just to comment on this.

    I run a 5775C and while it can be beat in terms of sheer horsepower, that is NOT what it’s for.

    My setup is as follows

    5775C @ 4ghz, eDRAM @ 2.2ghz (using h90 cooler) (delidded, heatsink corrected)
    HyperX Fury 1866 DDR3L @ 2ghz (memory controller will happily do 1.65V)
    7970ghz (yeah, it’s getting replaced soon)
    Sabertooth Z97 USB3.1 Mk1
    NVMe BPX Pro as boot drive (really nice drive btw)

    I use BOTH the iGPU and dGPU. dGPU for heavy games or compute. iGPU gets daily use/browsing/movies and light gaming/most emulation and compute.

    It idles at 800mhz.

    It lives 24/7 and I remote in and stream from it.

    The 5775C is best considered a mobile workstation chip.

    It will actually clock quite high…… but not with all the bells and whistles running (CPU features). But it SHINES at 4ghz, tastefully sipping from the socket.

    • plonk420
    • 12 months ago

    what voltage RAM did you use? i thought i read that this CPU wanted DDR3L 1.35v so i was a bit afraid to invest in it

    • Groove_C
    • 12 months ago

    The thing to consider is that all rather positive results were accomplished prior to Meltdown and Spectre mitigations, mine included.

    It would be really nice to see up to date OC results vs. other widespread CPUs.

    Especially in ArmA III.
    I would recommend using “Altis Benchmark v0.6” (by Helo) rather than YAAB for more consistent results.

    Using CPUs with more cores/threads, even in/for games that don’t support that many cores/threads, results in better/more consistent frametimes, because all the background tasks/processes are spread more evenly between more cores/threads.

    This way you can have lower FPS in your games with a specific CPU, but smoother gameplay because of more consistent frametimes, vs. more FPS with a higher-clocked CPU with fewer cores/threads but more micro-stuttering because of less consistent frametimes.

    • Groove_C
    • 12 months ago

    [url<]https://i.imgur.com/zoI11J2.jpg[/url<]

    Here one can see that the i7-5775C (3.6/3.3 GHz core/cache, DDR3-1600 10-10-10-27) is:

    1 FPS ahead of the i7-6700K (4.1/4.0 GHz core/cache, DDR4-2400 14-14-14-34)
    2 FPS behind the i7-8700K (4.3/3.7 GHz core/cache, DDR4-2400 14-14-14-34)
    4 FPS behind the i7-7740X (4.4/4.3 GHz core/cache, DDR4-2400 14-14-14-34) and the i7-7700K (4.4/4.2 GHz core/cache, DDR4-2400 14-14-14-34)

    All with a GTX 1080 Ti.

    • Alupang
    • 12 months ago

    Very interesting to revisit the i7 5775C! I was always baffled why after TR’s initial review of the 6700K (where 5775C > 6700K), the 5775C was omitted in (all?, most?) future CPU reviews/tests.

    One would think, that the best performing (for gaming) CPU at 6th gen Skylake launch time, would certainly be included in 7th Gen i7 7700K reviews. But nope! Same for 8th Gen 8700K. The best performing gaming CPU = 5775C is nowhere to be found in tests. Why?

    Also, the i7 5775C obviously shined brightest in games like Project Cars and Civilization series. TR’s 6700K review HIGHLIGHTED Project Cars in its tests, where we all learned 5775C > 6700K in both FPS & frametimes.

    Yet Project Cars is omitted in these tests. Why?

    Now I read you will retest the 5775C with proper OC and 2400 memory. Great news; thanks. May I politely request you include Project Cars & Civilization in your next tests? Please do 720p tests (or 1080p low) to remove the GPU out of the equation as much as possible.

      • drfish
      • 12 months ago

      I can’t speak for Jeff when it comes to what he’ll do next. However, I can explain why the 5775C wasn’t in the 7700K review. Jeff didn’t have it. Scott did the 6700K review but Jeff tested the 7700K and beyond. I think he only recently got his hands on a 5775C.

      As for why the other tests weren’t in this revisit, that’s simply a matter of the fresh data that had just been generated with the other CPUs. Running one CPU through the same gauntlet for a gut check is a lot quicker than running every CPU through games that aren’t in the current suite, and Jeff didn’t have time to spare with the 12-core and 24-core Threadrippers on deck.

      FWIW, I’m just as stoked as anyone to see what happens next, and I’m really hoping Arma III gets worked into the mix again.

        • Alupang
        • 12 months ago

        Seems odd that Jeff would “review” the 7700K while omitting and completely ignoring the best performing gaming CPU known at that time. TR’s 6700K review clearly showed this, so ignorance cannot be an excuse.

        Was there even any mention of the 5775C in the 7700K review? Like an asterisk in the conclusion at the very least, explaining the 7700K review should be taken only with a grain of salt considering the best performing CPU known at that time was omitted?

          • Jeff Kampman
          • 12 months ago

          I didn’t physically have the chip dude

            • Alupang
            • 12 months ago

            With no disrespect and trying my best NOT to come off as being some kind of jerk here…

            Because it was known the 5775C outperformed the 6700K, TR really needed to have the chip for the 7700K review. And the 8700K review. And all the Ryzen reviews.

            TR isn’t the only site that completely ignored the 5775C in all post Skylake reviews. Nearly all USA sites did exactly the same.

            • Jeff Kampman
            • 12 months ago

            Look, I acknowledge that we should have tested the 5775C for completeness, but you can always argue for doing more testing. The problem is that we can’t devote unlimited time to reviews, and there’s always another article around the corner putting limits on what you can do in a given time frame.

            Back when we were testing the i7-6700K, the difference between that chip and the i7-5775C was positive but small—not some order-of-magnitude difference that would have suggested any long-term superiority for the Broadwell part. We didn’t get a chance to overclock the chip, which might have changed that outlook, but what’s past is past.

            Like I said in this article, the 5775C was an interesting curiosity because of its performance, but Z97 was already long in the tooth and the chip was really expensive and hard to get even when it was current. The hard truth is that there just aren’t that many 5775Cs out there. Had Broadwell hit the market sooner and at a lower price, maybe that story would have changed. Skylake and Kaby Lake were available at their suggested prices and in volume, and we have to consider that relevance when assembling systems under test.

            By the time we got around to the i7-7700K review, TR was in transition between CPU reviewers and not all of our chips made their way between testing facilities. I only have a 5775C today because we did ultimately get some older parts transferred this year, but it’s sat unused because we just haven’t been able to do much historical testing of late.

            We didn’t deliberately exclude the 5775C from our testing, in any case, and I’m happy to revisit it as I’m able (along with some other historical platforms). This piece was meant to be a quick look at performance that we could add on top of a fresh data set, not an in-depth retrospective. People are obviously interested in overclocked performance so I’m going to try and do a follow-up on that.

            • Alupang
            • 12 months ago

            Intel didn’t wish the 5775C to catch the gaming world by storm. Intel’s marketing roadmap was clear: Skylake & Kaby. Not Broadwell. We all know the 5775C, with its large L4 cache, was expensive to make and I’m guessing that Skylake parts are cheaper (thinner substrate bending) and more profitable for Intel.

            And that’s fine with me. Intel can do as they wish. Free country, free market.

            But what stinks to me is “independent” review sites intentionally ignoring or skewing tests (GPU bottlenecking) on Broadwell seemingly to follow Intel’s marketing roadmap. With hindsight, go back now and read Tomshardware & Anandtech’s reviews of Broadwell. The narrative was clear: Haswell owners should skip Broadwell and wait for Skylake.

            But Tech Report’s review of the Skylake 6700K, with that “pesky 5775C” (pesky?) included sure threw a monkey into Intel’s marketing wrench. I was blown away with TR’s results and it caused me to research further outside US shill sites for more real Broadwell data. It was Swedish Sweclocker’s non-GPU bottlenecked tests that confirmed TR’s eye opening results for me.

            And so, being a stuttering Pentium G3258 on Z97 owner, I purchased a new 5775C for my Pentium upgrade. And for that I have to sincerely THANK The Tech Report here.

            Of course I was very excited to read TR’s upcoming 7700K review. I wanted to see the 5775C smoking an equal-clocked (or even +800mhz) 7700K in games like Project Cars & Civilization. Nope; denied. It appeared to me the 5775C’s plug was pulled from existence and that smelled very fishy to me. It appeared that TR decided to toe the company line in lockstep with other US mainstream sites.

            I still believe the 5775C is more than an “interesting curiosity” and should not have been sat aside unused, except for “historical testing.” Is the 5775C still relevant today? For games like Project Cars & Civilization & Witcher 3, I still believe so.

            I hope you don’t take any offense when I throw down the gauntlet here. Test the 5775C @ 4.2 + L4 @ 2.0 + 2400 RAM. Be sure to include Project Cars & Civilization games in your tests. Let’s see how the 5775C plays in these games. The jury is still way out for me.

            We already know the 5775C @4.2 > 7700K @ 5.0 in Witcher 3 right?

            [url<]https://abload.de/image.php?img=untefuepmis8u.jpg[/url<]
            This is close to the results from another site below. But note the Civilization VI result. Sadly, however, Project Cars is omitted everywhere.
            [url<]https://i.imgur.com/S8xHNgW.jpg[/url<]

            • K-L-Waster
            • 12 months ago

            Perhaps you missed the part where he said “I didn’t physically have the chip” ?

            No nefarious marketing agenda — just unfortunate logistics.

            • Alupang
            • 12 months ago

            OK…

            But it raises the question:

            How would *you* explain the complete omission of the 5775C in 7700K & Ryzen & 8700K & 9700K & 9900K reviews from mainstream US sites like Toms and Anandtech then?

            Just unfortunate logistics again, and again, and again,…?

            • Groove_C
            • 12 months ago

            Then go and ask there – not here.

            • Alupang
            • 12 months ago

            You say K-L-Waster hangs out on forums there? TIL.

            • K-L-Waster
            • 12 months ago

            As it happens, no I don’t — but ragging on Jeff for the choices other reviewer sites make is offside.

            Information on a rare chip being scarce /= vast conspiracy to suppress the truth.

            • Alupang
            • 12 months ago

            That’s fine; I’ll be the whipping boy for this circle jerk.

            The chip wasn’t “rare” or scarce for years. TR included it in 6700K tests so they had one (only to be lost later). Tom’s certainly had one (at least until they started omitting it in all tests). Anandtech had one (at least until they started omitting it in all tests). Heck, I bought one from B & H Photo. It appears everyone that wanted one had one, until they either lost or omitted it in all tests.

            It follows then, that the 5775C was not rare. Lost or ignored yes, but certainly not rare.

            • K-L-Waster
            • 12 months ago

            For several months I looked for one on Newegg.ca after the original TR article as a possible upgrade over the i5 4670K I was running at the time. They never even had it listed, let alone in stock.

            Eventually I gave up looking.

            • Groove_C
            • 12 months ago

            You can find hundreds of them on AliExpress.com for $283.
            I bought mine there. It’s fully functional, and I’ve delidded and overclocked it without any problems.

            The only thing is that I had to wait 1 month for delivery. But it’s ok.

            • K-L-Waster
            • 12 months ago

            Oh, I’ve moved on since then — currently running an 8700K. I meant I looked for it back in late 2015 when the original article came out.

            • Alupang
            • 12 months ago

            That’s not the point here. But B&H Photo had 5775C in stock for years. Multiple Amazon sellers too. And all the review sites already had 5775C. The Tech Report had one too, and was the only US site to expose its superior performance > 6700K. I quote TR:

            “6700K is in the running for the fastest gaming CPU on the planet—and it would’ve won, too, if it weren’t for the pesky Broadwell 5775C and its magic L4 cache. ”

            Let’s call this the “pesky” review, shall we?

            From about the time of the pesky review onward, the 5775C was only rare in terms of US mainstream sites including it in reviews (or even mentioning it). And TR says today that their 5775C did not make its way between testing facilities. TR didn’t bother to ensure the fastest gaming processor on the planet made its way between testing facilities. That, to me, means they ignored the 5775C.

            TR didn’t even mention the 5775C in its 7700K & 8700K reviews, after crowning it “the fastest gaming CPU on the planet”. Not even an asterisk in the conclusion, at the very least, explaining that the 7700K review should be taken with a grain of salt considering that the best-performing CPU known at the time somehow didn’t make it between testing facilities.

            Down vote me all you wish but my BS meter is pegged into the red.

            Let’s see some 5775C OC + 2400 ram tests with Project Cars (included in pesky review but not here) and Civilization. Not holding my breath.

            • K-L-Waster
            • 12 months ago

            So where are you going with this, exactly?

            Are you suggesting Intel has been pressuring TR to suppress the 5775C? But for some reason *hasn’t* been pressuring them to avoid giving Editor’s Choice awards to RyZen and ThreadRipper?

            • Alupang
            • 12 months ago

            Maybe not TR specifically, no. Too small fish.

            But Tom’s & Anandtech & LTT & & &? I would guess exactly that; yeah.

            Why TR ignored the 5775C after their excellent 6700K review, where they crowned the 5775C “the fastest gaming CPU on the planet”, really baffles me.

            Again, omission PLUS not even one word of King 5775C existence in 7700k & 8700k reviews is beyond my comprehension.

            • Krogoth
            • 10 months ago

            #1 – It is because Jeff didn’t have a unit on hand until recently.

            #2 – The existing i7-5775C data on TR was from obsolete drivers and titles. It would have put the chip in a poorer light.

            • Krogoth
            • 10 months ago

            The 5775C was rare and hard to find back in its heyday. Intel never made them in large numbers and quickly shifted its 14-nm CPU production towards Skylake.

    • enigma
    • 12 months ago

    I find this review quite interesting. I read the initial reviews of the 5775C and was a little bit disappointed, because in “normal” benchmarking, the additional cache doesn’t make the processor perform any better.

    But soon some gaming-focused reviews showed that the cache gives a nice little advantage in some engines ([url=https://www.computerbase.de/2015-10/intel-core-i5-6500-5675c-4690-test/3/#abschnitt_spiele_720p<]in the range of 0-20 percent[/url<]). Overclocked to 4.2 GHz at 1.26 V, it was the perfect deal for a quite cool desktop, and not as overhyped as the 4790K. I hoped that in the near future more games would take advantage, and I bought that processor.

    According to your review, the cache does not improve multi-threaded gaming workloads to an unexpectedly high degree. I must admit that I had hoped for that 🙁 But as you said in your review, it DID age, and not too badly. So that’s the least I expected. It would be interesting to see how a 6700K or 4790K aged compared to the 5775C 😉

    • moose17145
    • 12 months ago

    Would be interesting to see how the Broadwell-E’s are holding up. They have more cores available to them, along with 4 memory channels instead of just 2.

      • MOSFET
      • 12 months ago

      It would be interesting, but no eDRAM.

    • Groove_C
    • 12 months ago

    Jeff, thanks for revisiting this CPU.

    But it would have been better if you had tested it overclocked.

    Because everyone who has consciously bought (and continues to buy) this CPU to replace their 4670/4690/4770/4790K has overclocked it, or will.
    THE ONLY REASON to buy it was/is its 128 MB of L4 cache, which not only helps the frame times but raises the FPS in CPU-limited games that need(ed) high per-core performance rather than lots of cores.

    There was/is no interest in buying it for use at stock speed, especially considering the price.

    In games with high core/thread-count support, it loses vs. Threadripper/Ryzen/7900X/8400/8700/9700/9900K.
    Crysis is the wrong game with which to judge this CPU.
    In applications it also loses even to a higher-clocked 4770/4790K.

    Also, the L4 cache latencies are very low.
    And the L4 cache bandwidth is comparable to DDR4.
    And you should add the L4 cache bandwidth to the DDR3 bandwidth.
    That way you have even higher bandwidth than any DDR4 kit.
    So using this CPU even with DDR3-1600 is compensated for in large part by the L4 cache bandwidth.

    The CPU is overclockable to ~4.2 GHz (<1.4V) core and to ~3.6-3.8 GHz (<1.35V) cache and ~2.1 GHz L4 cache.
    L4 cache has its own voltage that one can’t raise/see. Just raise the frequency from stock 1.8 GHz to 1.9, 2.0 or 2.1 and see if it boots.
    L4 cache frequencies higher than 2.2 GHz have a negative impact on scores.

    It gave me 15 avg. FPS more in ArmA III vs. my i7-4790K @ 5.0/4.5 GHz core/cache @ DDR3 2400 MHz 10-12-12-31-2.

    Deus Ex: Mankind Divided
    [url<]https://abload.de/image.php?img=untdsubxlbs4u.jpg[/url<]
    Hitman
    [url<]https://abload.de/image.php?img=u6cuv5ibsex.jpg[/url<]
    Witcher 3: Wild Hunt
    [url<]https://abload.de/image.php?img=untefuepmis8u.jpg[/url<]
    Battlefield 1
    [url<]https://abload.de/image.php?img=untitw2uoejasxv.jpg[/url<]
    Fallout 4
    [url<]https://abload.de/image.php?img=unhsu2enlsnb.jpg[/url<]

      • Alupang
      • 12 months ago

      Close but: 8700K @ 5.1 > 5775C @ 4.2 > 7700K @ 5.0.

      5775C is such an amazing chip. I run mine with the tiny included Intel fan, because I can.

      Love to see Project Cars & Civilization tests. 5775C would be #1 I bet.

    • Jeff Kampman
    • 12 months ago

    FYI, folks, the amazing drfish is providing me with (among other things) a kit of DDR3-2400 memory so that we can plumb the effects of higher-speed RAM on our older Intel CPUs. Stay tuned for that and more.

      • MOSFET
      • 12 months ago

      This is great.

      • MrJP
      • 12 months ago

      Please throw in the i7-6700k if at all possible as well. Just out of interest in revisiting the contest in the original review of course, and nothing to do with the fact I have the 6700k and would selfishly like to see where it sits relative to the latest and greatest…

        • moose17145
        • 12 months ago

        Heck if we are throwing things into the wish list… I would like to request an i7-6900K riding in the 2011 V3 socket as well.

    • rawcode
    • 12 months ago

    Thanks for the review, glad I didn’t grab one a few months ago.

    I recently upgraded from a core i3-4150 to a i5-4690k, and that worked out well. When i was looking at options I briefly looked at the i5-5675C and the i7-5775c, however the premium they wanted on ebay for them was excessive. Instead I got the i5-4690k for 80 bucks and it was a decent upgrade.

    • kithylin
    • 12 months ago

    So you’re testing a processor with a stock maximum turbo speed of 3.6 GHz (all cores active) against modern processors with stock maximum turbo speeds of up to 4.4 GHz (all cores active), and up to 4.6 GHz with two cores active (8700K), and yet you didn’t think to overclock the 5775C to at least 4.0 or 4.3 GHz to match the newer chips? Or at least downclock the newer chips to match the stock speeds of the 5775C? You’re only showing us raw MHz differences here, which we can all understand just by reading the specifications of the different chips. Your results here actually do not show us anything whatsoever. You absolutely need to match the clock speeds on all tested processors for this to be a correct test. Otherwise you’re just showing us the differences in clock speeds. This entire test is pointless, and you all completely wasted your time both in testing and in posting the review. Why did you even bother? Did you think we’re too stupid to realize?

      • Jeff Kampman
      • 12 months ago

      Thank you again for not reading the article.

      Since some people will apparently regard this response as too flip: we tested exactly what I meant to test, which was whether eDRAM helps keep the i7-5775C fresher than expected or fresh at all versus the best hardware of 2018 in the same testing regimen we’ve been using of late. We discovered that it doesn’t, and that’s all I was setting out to test.

      The goal of this article was not to synthetically determine whether eDRAM is still an advantage, all else being equal, because there’s no way we can truly make everything else equal (want to hook up a DDR4 IMC for me on this chip?) You can haul back clock speeds on newer CPUs if you want, but since that’s not how any real person will ever use their processors, that approach isn’t particularly interesting to me.

        • Redocbew
        • 12 months ago

        No good deed goes unpunished, right?

        • kithylin
        • 12 months ago

        And thank -YOU- for confirming what I already suspected: Techreport is incapable of actually reviewing any product correctly. I now know for the future to -NEVER- read any “review” on this website to actually discern any performance information between two different bits of hardware.

          • Krogoth
          • 12 months ago

          The review was trying to explore whether the L4 cache, a.k.a. the 128 MiB of eDRAM on the i7-5775C, still has an impact on modern games, and whether it would measure up to modern chips/platforms.

          It is quite evident that this isn’t the case, and it is most likely due to the dual-channel DDR3 on the Broadwell chip. It simply cannot keep up with its newer brethren riding on DDR4. The eDRAM isn’t large enough, nor as fast as modern DDR4 DIMMs.

          This is precisely why Intel abandoned eDRAM for future chips. It isn’t worth the transistor cost, and the performance benefits are dubious at best. It was nothing more than an experiment that ended up being a one-time fluke. It only gave the Broadwell chip a performance benefit at the time because dual-channel DDR3 was a well-known bottleneck on aggressively clocked Sandy Bridge-to-Broadwell chips. DDR4 alleviated this bottleneck.
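          For scale, that bandwidth point can be put into rough numbers. Here is a minimal sketch, assuming the usual peak-theoretical formula (channels × transfer rate × 8 bytes per 64-bit transfer); the function name and the decimal-GB/s convention are mine:

```python
# Peak theoretical bandwidth for a dual-channel memory configuration.
# Each channel is 64 bits (8 bytes) wide, so: channels * MT/s * 8 bytes.

def dual_channel_gbps(mt_per_s: int, channels: int = 2) -> float:
    """Return peak bandwidth in decimal GB/s."""
    return channels * mt_per_s * 8 / 1000

# Broadwell's dual-channel DDR3-1866 vs. a typical DDR4-2666 setup:
print(round(dual_channel_gbps(1866), 1))  # ~29.9 GB/s
print(round(dual_channel_gbps(2666), 1))  # ~42.7 GB/s
```

          The eDRAM link is faster still, but it only covers working sets that fit in its 128 MiB.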

          • DeadOfKnight
          • 12 months ago

          Your loss. TR benches are the best in the industry. Apparently you just can’t comprehend what this article was about. As an i7-5775c owner, I appreciate it. Yes, I’d prefer it if he included some overclocked results like he did with the 9700k, but I’m not one to complain about the work someone did for me at no charge.

          • Jeff Kampman
          • 12 months ago

          “You didn’t review the thing in the way I wanted” is not the same as “incapable of actually reviewing any product correctly,” sorry. We clearly came into this article with different questions and I’m sorry I didn’t answer yours, but that doesn’t mean our approach was incorrect.

            • MOSFET
            • 12 months ago

            He or she is 3 years late if they wanted the actual Broadwell desktop review. And that’s on Intel, not TR.

          • Andrew Lauritzen
          • 12 months ago

          “Correctly”? Dear lord where to begin…

          I’ll skip all that and just say there’s a reason why TR has a following of knowledgeable enthusiasts and industry folks, and it’s not because they do reviews exactly like everyone else.

          This sort of thing, the original 5775c review, the frame time analysis and more are what keep us coming back. If you’re just interested in basics like what you’ve described there are plenty of other sites that provide that, but don’t mistake that for being more “correct”…

          • steelcity_ballin
          • 12 months ago

          You are exhausting. It must be really hard living a life where you can’t understand a simple concept that’s demonstrably true, and where, when proof of your ignorance is presented, you perform feats of mental gymnastics to avoid the obvious and simple truth.

          • torquer
          • 12 months ago

          You are fake news.

    • Arbiter Odie
    • 12 months ago

    Thank you for the update, Jeff. I guess it is pretty low-clocked, isn’t it? Big snazzy cache vs. 1 GHz more, and more IPC, and more cores… hardly seems fair in hindsight lol

    • cheesyking
    • 12 months ago

    Still wouldn’t kick it out of bed for eating biscuits.

    • danbert2000
    • 12 months ago

    As a 5775C owner, I appreciate the re-review. I never quite bought into the hype around the processor; I just knew it made sense for me to jump from a 4690K to this chip because of heat concerns in my shoebox case. However, I feel like you missed a chance to get some genuinely interesting numbers here, as most people buying these chips years later are going to overclock them. Also, the fact that you used 1866 RAM is a bit baffling. I know that this may be the “max speed” supported by the processor, but it would have been a much more interesting test to pit the 5775C, with its best foot forward, against the newest, hottest chips.

    If you have a chance, could you retest with something that an enthusiast might be able to reach easily, to give a better picture?

    1. 2400 MT/s RAM
    2. 4 GHz overclock at 1.25 V (4.2 is doable, but takes some voltages that most will avoid, something like 1.35 V)

    Essentially, I see the benchmarks here and they don’t mean a lot to me. I didn’t buy a Z97 board and a 5775c to run it at 3.6 GHz with bargain RAM…

      • DeadOfKnight
      • 12 months ago

      I got my chip to 4.2 on all cores with 1.328 V. It’s completely stable at this voltage, even on the most strenuous of stress tests like OCCT. Good temps with the AIO liquid cooler. Turning off HT drops temps by a few degrees and even raises FPS in many games as well. I know I could do better if I tried my hand at delidding. I did manage a stable overclock at 4.3, but I was not very comfortable with the much bigger jump in voltage and heat that required, so I backed it down to 4.2 and called it good.

      My RAM is 16GB 1833. I don’t know how much good it would do to upgrade it, but I’m willing to bet it’s not worth the price.

    • yeeeeman
    • 12 months ago

    Dear reviewer, I dare say you did all this work for nothing, because it could be summarized in one sentence.
    The short explanation is that we are in the multicore era now, and most games use more than 4 cores.
    In 2015, when this CPU arrived, most games were still using the 2-3 cores that developers were targeting on the Xbox 360/PS3. Now, with the 8-core PS4/Xbox One, most games can scale to 8 cores and beyond. So, as we can see, even the six-core 2600X, which has roughly the same IPC as Broadwell/Skylake, does better in most games.

      • Jeff Kampman
      • 12 months ago

      Dear commenter, thank you for not reading the article and our explanation of why this chip was specifically tested. Sincerely yours—

        • Krogoth
        • 12 months ago

        Not even Ice Lake is going to treat that burn.

        • torquer
        • 12 months ago

        This is the kind of stuff that bugs me. This commenter is an idiot, but you needn’t take the time to respond and lower yourself by doing so. Be an editor-in-chief or be one of us. Trying to be both diminishes the site and its value.

        I’m sure most won’t agree with me here, but this is why I stopped paying for gold. It’s also why I don’t patronize Gamers Nexus anymore. I want reviewers and editors to always appear dispassionate and above petty squabbles. If I want to see that stuff I can go to the comments and muse over my own downvotes.

          • cphite
          • 12 months ago

          One of the things I really like about TR is that the reviewers and editors are just tech fans like the rest of us, and [i<]don't[/i<] consider themselves above the folks who read what they're writing. We call them out when they're wrong (which, it need be said, isn't very often); it's only fair they be allowed to call us out when we are.

          And critiquing an article without actually reading it? That's just begging for it.

            • MOSFET
            • 12 months ago

            Well said, cphite.

          • derFunkenstein
          • 12 months ago

          I like public executions like this. The one surefire way to get a response from Jeff on the site (or on Twitter, for that matter) is to say something demonstrably false. If you try to join a conversation or ask questions, you’ll be met with radio silence. It’s not exactly new, though. Scott ran the site the same way. They’re just too busy to have fun with all of us in the comments.

          Let me make one thing clear, though: I’m not here for the personalities. I’m here for the testing methodology and reviews. As long as those keep coming, I’ll keep my subscription up.

            • torquer
            • 12 months ago

            Always fun to see the other guy get skewered. Until you’re the other guy.

            Scott didn’t do this stuff. Did we respect him less for it? I’m not sure I would have respected him more had he posted snarky responses. Hell, responding to idiots would be more than a full-time job.

            This is a creative enterprise. If you can’t suffer a few fools and ignore the noise you’re just begging for a lot of unnecessary stress. I guess I don’t see idiocy as worthy of Jeff’s time (or anyone else’s).

            • derFunkenstein
            • 12 months ago

            It’s not, but the human condition prevents us from doing it. We’re all broken in some way. I have my vices, you have yours. If someone publicly attacked my credibility (such as it is), it’d be hard not to respond.

            For example, I got jeered by a handful of folks for not dragging Ethernet down my hallway to test the Shield TV’s streaming stuff over a wired connection. It took all I had to temper my responses. And that was nothing compared to the two pages of notes full of text that needed fixing.

            All that to say: I’m not skewering Jeff here. I know it’d tear me apart to do this for years as the morons kept multiplying. Lashing out is a human response. I’d probably do it even more.

            • K-L-Waster
            • 12 months ago

            Scott didn’t respond if someone disagreed with him, but I *do* remember him responding when someone accused him of writing something that he hadn’t written (usually of bias against company XYZ).

            • torquer
            • 12 months ago

            True enough, but I don’t think that’s what happened here. The guy didn’t read the article and made some dumb statements. I guess in my world that’s not really a scathing, well-reasoned critique that demands a response.

            I don’t dislike, hate, or disrespect Jeff. The man is intelligent, does quality work, and, I kind of assume, puts in long hours for not a lot of pay. I just feel like he’d be better off (and I’d personally appreciate it) if he just ignored all the idiocy. Sure, if someone calls the ethics of the site into question in some halfway-reasoned post, by all means respond.

            But if some idiot just can’t comprehend English, let him be an idiot on his own. Just my opinion, no expectation of changing anyone else’s. Hell, I wouldn’t even say those who disagree are “wrong”; they just don’t see it the same way, and that’s alright.

      • Anonymous Coward
      • 12 months ago

      A one-sentence summary might suit modern children, but I rather prefer the review where it was actually tested and documented.

      • qmacpoint
      • 12 months ago

      Perhaps an explanation is required: core count doesn’t translate directly into gaming performance. Even Linus Tech Tips made that statement: [url<]https://www.youtube.com/watch?v=fBxtS9BpVWs[/url<]

      • Andrew Lauritzen
      • 12 months ago

      Is this where I have to point out, again, that “games can launch more than 4 threads” does not imply “games run faster on CPUs with >4 cores”? 🙂

      It ultimately doesn’t matter how many cores the consoles have if their throughput remains so far below PC CPUs that the latter can time-slice all those unnecessary threads and still have time to fall into sleep states each frame…

      Trust me we’re still pretty deep in the realm of single-critical-thread being the rate determining step on PC. Granted it’s now determining rates well north of 100Hz in most cases which is indisputable progress, but the fundamental situation still remains.

    • ermo
    • 12 months ago

    Just came here to post that the picture with the CPU on top of what I assume to be a GPU cooler is freaking awesomesauce!

    Loved the article and am definitely interested in seeing more comparison stuff now that Jeff has resurrected his trusty Z97 setup.

    • strangerguy
    • 12 months ago

    Used prices for the 4790K/5775C are still so high because eBayers only think “what is the fastest CPU I can put in my currently outdated socket?” instead of “why should I even bother with a $330 used Intel 4C/8T when I can get a brand-new 2600X/B450/16GB DDR4 for around the same price, perhaps even for less money if I sell my old Z97 and DDR3 kit?”

      • NTMBK
      • 12 months ago

      No chance you’ll get enough money from the second hand sale (especially after eBay takes their cut) to make that add up. And then there’s the price of the new Windows license, the hassle of tearing apart your entire system and rebuilding it from the ground up, reinstalling all of your software… Sometimes just getting a drop-in upgrade is a hell of a lot easier.

        • DoomGuy64
        • 12 months ago

        The biggest caveat I can think of, though, is that Intel changes motherboards for almost every CPU, so the number of people chasing these chips is probably minimal. Especially after Spectre/Meltdown. You’re better off overclocking your old chip if you don’t want to upgrade the whole system.

          • psuedonymous
          • 12 months ago

          [quote<]The biggest caveat I can think of though is that Intel changes motherboards for almost every CPU[/quote<] Every other generation, in lockstep for the last decade.

    • NTMBK
    • 12 months ago

    Thanks for pushing down the price Jeff, so I can finally get one for my old Haswell system!

    • Nictron
    • 12 months ago

    Thanks Jeff,

    Appreciate the i7-5775c interrogation.

    So when I upgrade to a 2080 Ti-class card one day, it will come with a full system refresh.

    For now I am quite content that the i7-5775C runs in the pack at >60 FPS quite happily.

    I see all gaming tests were conducted at 1920×1080 and 144 Hz.

    What about 1440p and 4K testing?

    • ronch
    • 12 months ago

    Oh wow, I don’t recall seeing the FX included in these scatter plots for a while now. Maybe I just wasn’t looking hard enough, or everyone booted it out of their lives when Ryzen came out. There’s no reason to buy one now unless you really want to, or you just wanna collect old CPUs, but I’m still on my FX-8350 and have no plans to upgrade unless the motherboard goes kaput. Got it on December 15, 2012.

      • Redocbew
      • 12 months ago

      Having just replaced a motherboard that did go kaput, let me tell you: it’s a lot easier to replace things on your own schedule than to let the death of a system schedule it for you.

    • djayjp
    • 12 months ago

    9600k… Pls

    • Kretschmer
    • 12 months ago

    This is a creative idea for an article and one that is much appreciated. I’d also enjoy one that pit the i3s and Pentiums of the world against the latest and greatest.

    It seems odd to include the i9-7900X (which six people own) and not the i7-7700K, which was Q1 2017’s hotness.

      • MOSFET
      • 12 months ago

      I think the 7900X is there because the numbers were available from all the recent CPU testing, which has typically covered high-core-count, high-dollar stuff lately. It would be interesting to see a brief update later with perhaps the i7-4790K, i7-6700K, and i7-7700K included. Maybe even an i3 from the 7300 to the 8350K, and possibly the Pentium AE. I know, that’s a lot of repetitive work.

    • wingless
    • 12 months ago

    Crysis 3 is prejudiced against the 5775C! Those results almost make no sense.

      • Jason181
      • 12 months ago

      Crysis has always liked a lot of threads, and Hyper-Threading gains you about 30% per core rather than 100%. So even though Crysis was likely using all 8 threads of the 5775C, that’s only about 5.2 effective cores (4 cores @ 100% plus 4 threads @ 30%).

      Even if the IPC and clock speed of the two chips were the same, the 6-core CPU should have a significant edge. It would be interesting to find out the average clock speed of the two chips during that test so we could do a rough calculation of the advantage that eDRAM provided, if any.

      I know we’d have to take into account the change in IPC, but I assume it wouldn’t be that hard to find the effective difference between the architectures in general, and maybe even in Crysis specifically.
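      As a quick sanity check on that arithmetic, here is a minimal sketch; the ~30% SMT uplift figure is the rule of thumb from the comment above, not a measured value, and the function name is mine:

```python
# Estimate "effective cores" when a workload uses SMT (Hyper-Threading)
# sibling threads, crediting each extra thread with a fractional uplift.

def effective_cores(physical_cores: int, threads_used: int,
                    smt_uplift: float = 0.30) -> float:
    """Each physical core counts fully; each extra SMT thread adds smt_uplift."""
    extra_threads = max(0, threads_used - physical_cores)
    return physical_cores + extra_threads * smt_uplift

# i7-5775C: 4 cores / 8 threads -> 4 + 4 * 0.3
print(round(effective_cores(4, 8), 1))  # 5.2
# A 6-core chip without SMT brings 6 full cores to the same workload
print(effective_cores(6, 6))            # 6.0
```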

    • DPete27
    • 12 months ago

    [quote<]reliable NVMe boot (which has never worked consistently on our Z97 testbed)[/quote<] Wait… is this a thing? I was planning on getting my brother an NVMe SSD for his Gigabyte Z97-MX Gaming 5 for Christmas.

      • weaktoss
      • 12 months ago

      That board’s M.2 slot is almost certainly fed by only two PCIe lanes, not four. You might also want to give him a PCIe add-in card with an x4 M.2 slot (this is what we used in the old storage test rig).

      He should be able to use an NVMe drive as secondary storage regardless, as long as he’s on a reasonably modern operating system. But Z97 NVMe boot support is a bit of a crapshoot. Our Asus board officially supported it after some BIOS updates, but even then there were a handful of drives that it just wouldn’t recognize.

      You could think about getting him a SATA M.2 drive instead; it would mean fewer headaches.

        • DPete27
        • 12 months ago

        To be clear, this would be intended as an OS/boot drive.

        I’m not concerned about the x2 M.2 slot (which I agree is most likely x2 and not x4), since even if it’s PCIe 2.0 from the chipset, that would give it 1,000 MB/s max, or 1,970 MB/s if it’s indeed PCIe 3.0. That’s plenty of bandwidth, and certainly more than any SATA SSD is going to offer.

        I guess I’ve got some time to search the net to see if Gigabyte boards have similar issues to your Asus board. Thanks for the heads-up.
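        Those figures follow from the commonly cited per-lane PCIe throughput numbers after encoding overhead; a small sketch of that arithmetic (the function name is mine):

```python
# Approximate usable PCIe bandwidth by generation and lane count.
# Per-lane, after encoding overhead: Gen2 ~500 MB/s (8b/10b),
# Gen3 ~985 MB/s (128b/130b).

PER_LANE_MBPS = {2: 500, 3: 985}

def pcie_link_mbps(gen: int, lanes: int) -> int:
    """Rough max throughput of a PCIe link in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

print(pcie_link_mbps(2, 2))  # 1000 -- the x2 Gen2 worst case
print(pcie_link_mbps(3, 2))  # 1970 -- x2 Gen3
print(pcie_link_mbps(3, 4))  # 3940 -- a full x4 Gen3 slot or riser
```

        Either way, both x2 cases clear the ~550 MB/s ceiling of a SATA SSD with room to spare.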

          • AutoAym
          • 12 months ago

          From my experiences as stated earlier:
          If you’re getting an M.2 SATA SSD, the onboard M.2 slot will work just fine for either boot or secondary storage.
          If it’s an M.2 NVMe SSD that’ll use 4 PCIe lanes? I’d strongly consider getting a riser card. It’s like having a couple of extra notches in your belt at Thanksgiving dinner.

          Regarding UEFI between Gigabyte/Asus, in my experience,
          GENERALLY SPEAKING (understanding there are always exceptions to the rule):
          ASUS = more features/overclocking support
          Gigabyte = more stable

          • Chrispy_
          • 12 months ago

          It’s not just the board vendors at fault either.

          I’ve had to send back NVMe drives because they weren’t bootable, even on a modern x399 platform. No manner of BIOS update fixed the issue, only a different SSD. IIRC it was WD drives not being seen at boot, whilst Kingston/Samsung worked.

      • AutoAym
      • 12 months ago

      Not a thing.
      At least, not with the Z97-MX Gaming 5 I’ve been running for the last 4+ years.
      It’s been perfectly happy to boot Win 8.1/10 off of a Samsung 950 Pro from both the onboard M.2 slot (w/ 2 PCIe lanes) and the $20 PCIe x4 riser card I bought a year later to get moar lanes.

      • Krogoth
      • 12 months ago

      It is just part of the headaches that are commonplace for products with transitional features.

    • DeadOfKnight
    • 12 months ago

    It’s definitely not worth going on eBay and buying overpriced, second-hand components today, but it’s clear from these graphs that this is still a chip capable of playing with the big boys. There’s definitely nothing compelling about upgrading from the i7-5775C just for gaming. If you got one of these chips after reading the review back in 2015, you definitely got your money’s worth in terms of longevity, and apparently resale value.

      • kuraegomon
      • 12 months ago

      With that resale value, I’d sell the hell out of that chip – and the motherboard and RAM too, obv – and get into a 9700K or 9900K stat 🙂

      Recent vintage DDR4, plus the platform advantages, plus the additional general-purpose performance? And, most importantly, resetting the clock on when you need to get your next system from 3 years away to probably 8 years away == easy choice. I wouldn’t recommend it for most chips of that vintage, but that inflated resale price tips the balance fo’sho’.

        • ozzuneoj
        • 12 months ago

        Yep, I did this with my Q9550+P45 back in 2011 and basically got my 2500K system for free. Stupidly high resale value on some items makes for a very easy upgrade decision. Sadly, my 2500K isn’t worth enough now to bother trying to sell it. When I upgrade this system I’ll probably keep all of this hardware as a backward compatibility machine since it will run basically any operating system, it has conventional PCI slots and with my GTX 970 it’ll even maintain native VGA support (I collect CRTs).

        I have to use those excuses to justify keeping it, since I’ve grown attached to it after 8 years. I’m almost glad it isn’t worth enough to give me a cheap upgrade option… 😉

    • Wirko
    • 12 months ago

    14 nanometers, you said? This chip is absolutely up-to-date.

    • DeadOfKnight
    • 12 months ago

    Have you tried overclocking it? My i7-5775c has a 24/7 overclock at 4.2 GHz. This was part of the appeal of buying it, as it’s an unlocked processor that could perform even better than the results TR found back then. I would be interested in seeing how much of a difference that makes.

      • Freon
      • 12 months ago

      An article including a few “legacy” processors overclocked with reasonable settings and GPUs might be nice. Lots of people with Sandy Bridge, Ivy Bridge, and Haswell (mk1 or mk2) are trying to figure out if an upgrade is worth it.

        • ozzuneoj
        • 12 months ago

        I’d like to see this too. In the meantime, you may want to check this one out:
        [url<]https://techreport.com/discussion/31410/a-bridge-too-far-migrating-from-sandy-to-kaby-lake#metal[/url<]

        This article and several of the comments offer some insight on the subject, but it's nowhere near as in-depth as a usual TR review. I really would love to see a couple of benchmarks with older systems. It'd be interesting, too, if people could point out some specific games or tasks that noticeably stutter, run slowly, or load slowly on older systems, and have those situations compared to a few newer systems (recent i5, Ryzen, recent i7).

    • NovusBogus
    • 12 months ago

    One of Broadwell’s advantages is that it’s the last CPU architecture fully compatible with Windows 7. This is probably even more of a factor in the prices being what they are than its reputation among PC enthusiasts.

    • derFunkenstein
    • 12 months ago

    On the GTA V page, the button for the 5775C re-loads “Grand_Theft_Auto_V_frametime_plot_9900K.png” instead of the chart for the 5775C, resulting in me writing a long rant. I cleared the rant after I looked at the file names and refreshed the page to see the 5775C’s chart prior to clicking buttons. 😆

    Edit thanks for correcting it. 🙂

    • valkrys
    • 12 months ago

    I’m curious as to how you find the 99th-percentile figure, and the “beyond XXms” numbers. There seem to be some inconsistencies here in the data. As an example (this seems to be the case on all CPUs and tests listed): in Deus Ex the 2700X is listed as having a 99th-percentile frame time of 13.2 ms / a 75-fps 1% low. But if you look at the “time spent beyond” data, it spends less than 1 second beyond 11.1 ms / below 90 fps.

    Calculating against your total frames delivered and average frame rates, I can see you’re running hour-long benchmarks, meaning the real time below 90 fps is less than 0.003% of the entire benchmark. I’m not disputing your averages, just pointing out that those particular figures don’t come close to adding up, and wondering if you could clarify. Thanks.
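    For what it’s worth, the two metrics measure different things, which is why they can seem to disagree: the 99th-percentile frame time is a rank statistic over all frames, while “time spent beyond X ms” sums only the excess past the threshold. A minimal sketch of the distinction below (function names and the simple nearest-rank percentile method are illustrative choices, not necessarily TR’s exact pipeline):

```python
# Sketch: 99th-percentile frame time vs. "time spent beyond" a threshold.
# Assumes frame_times_ms is a plain list of per-frame render times in ms.

def percentile_99(frame_times_ms):
    # Frame time at the 99th percentile (simple nearest-rank method):
    # 99% of frames completed at or below this time.
    ordered = sorted(frame_times_ms)
    rank = max(0, int(0.99 * len(ordered)) - 1)
    return ordered[rank]

def time_beyond_ms(frame_times_ms, threshold_ms):
    # Total time accumulated only by the portion of each frame that
    # runs PAST the threshold -- a handful of mild spikes adds very
    # little here even if they dominate the high percentiles.
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Toy run: 99 frames at 10 ms plus one 50-ms hitch.
frames = [10.0] * 99 + [50.0]
p99 = percentile_99(frames)            # dominated by the bulk of fast frames
excess = time_beyond_ms(frames, 11.1)  # only the single slow frame contributes
```

So a chip can post a relatively high 99th-percentile frame time while spending almost no absolute time past a nearby threshold; the two figures aren’t directly convertible into each other.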

    • SuperPanda
    • 12 months ago

    Neat article, thanks for revisiting this.

    It’d be interesting to see the importance of clockspeed independent of platform enhancements. At the time the “point” or “value” of the 5775c and its eDRAM was that it allowed a lower-clocked chip to perform like a higher clocked CPU with similar IPC and platform speeds. Basically it was “sometimes eDRAM == clockspeed”.

    I know you’ve got the 9700K and OC’d 9700K, but with Turbo being what it is, those are kind of hard to interpret directly. A few runs of a fixed-clock over/under-clocked 9700K could be more illuminating, as they’d more directly indicate how important clockspeed is, especially if contrasted with a fixed-clock 5775c. That might indicate whether Intel could release a 10nm desktop chip with eDRAM to overcome 10nm’s expected clock deficiencies, similar to the way the 5775c’s clock was hampered by 14nm’s deficiencies.

    You know, if you’ve got nothing better to do. 😉

      • Krogoth
      • 12 months ago

      Considering the yield issues with the 10nm process at the moment, I doubt that Intel is going to opt for eDRAM again until the 10nm process is fairly mature and proven.

    • Krogoth
    • 12 months ago

    I suspect that DDR3 is what’s holding back the 5775C. Newer games are more sensitive to memory bandwidth than the titles that existed during the 5775C’s heyday. DDR4 is the largest factor in Skylake’s bump over Haswell and older chips.

      • tsk
      • 12 months ago

      Nah fam.

        • derFunkenstein
        • 12 months ago

        Well, back in 2015 it would hang with Skylake, which was running DDR4-2400 memory, but DDR4 memory bandwidth has greatly increased since then; there’s a lot more available to the CPU nowadays. But we don’t know if that’s the cause or not, since there doesn’t happen to be any CPU with four Sky/Kaby/Coffee Lake cores and Hyper-Threading to compare to. Having an i7-6700K or 7700K in this test would have answered that question.

          • Airmantharp
          • 12 months ago

          We were running DDR4-3200+ back then too…?

            • derFunkenstein
            • 12 months ago

            Ok so I didn’t read the review recently but I didn’t think it was available then.

            Still, without a similarly-configured DDR4 platform included, we don’t know; maybe it’s closer to those chips than we give it credit for. At the time, it was basically just as fast.

            • derFunkenstein
            • 12 months ago

            -1 for admitting I was wrong. Thanks Tech Report! 😆

    • Takeshi7
    • 12 months ago

    I wish the i7 4790k was included in the numbers just for comparison.

      • DancinJack
      • 12 months ago

      [url<]https://techreport.com/review/28751/intel-core-i7-6700k-skylake-processor-reviewed/6[/url<]
      Obviously it's not exactly the same games, but you still get an idea.

    • Mr Bill
    • 12 months ago

    That was an interesting read. I really like it when TR investigates an issue. Especially one I was not even aware existed.

    I wonder if Optane memory on the package (like the server CPUs’) is similar.

    • Chrispy_
    • 12 months ago

    Man, I love it when TR does articles like this 🙂

    I’d love to see a 2500K in those charts, and/or the nearest comparison without the L4 cache – like a 4770K.

      • Acidicheartburn
      • 12 months ago

      Same. I’m curious how my 4770k holds up. I’m sure it’s plenty fine and I don’t feel the need to upgrade. It still feels like a screaming monster in my system.

        • Voldenuit
        • 12 months ago

        As always, it depends on your individual needs and uses.

        I finally upgraded my 4790K to an 8700K recently, and the 8700K finishes video encodes in half the time the 4790K did, especially if I’m doing cuts/edits/effects/grading. Well, no surprise since it has more cores (although it’s notable it has less than 2x the cores), but if I were only gaming, I doubt I’d have noticed any difference between the two.

    • techguy
    • 12 months ago

    I think you ought to include results from CPUs that were tested the last time you explored this topic, to see if perhaps Spectre/Meltdown mitigations (which are known to have a disproportionate effect on older chips) are contributing to these poor results. That, or find an old mobo with an old BIOS and include a re-test.

    Downvoted for asking for more info. Please, provide a counterpoint as to why my query is “wrong”.

      • Krogoth
      • 12 months ago

      Mitigations mostly affect performance in real-world server applications (mostly I/O throughput). They have little to no impact on the majority of gaming titles, even on really old chips.

        • techguy
        • 12 months ago

        I still haven’t seen anyone test older chips with these mitigations enabled, not in gaming workloads anyway. I would just like to see why the fps/dollar graph now has the 5775c at the bottom of the chart instead of the top. When this topic was originally investigated, the 5775c beat out the 6700k, now it loses to the 8700k and other modern chips. I’m perfectly willing to accept this (the data is what it is), I just want to know why.

          • Krogoth
          • 12 months ago

          There are several tests out there in the field done when Meltdown/Spectre was big news.

            • techguy
            • 12 months ago

            Find them and link them then. When I searched there were none that included testing of older chips.

            • techguy
            • 12 months ago

            Don’t see a response to this. Please, feel free to back up your statement. I’m happy to look at the data. That’s all I’m asking for here: more data.

            (referring to the quoted post #13 by Krogoth to which my post #16 was a response)

            • Redocbew
            • 12 months ago

            Krogoth is not impressed enough to offer further replies.

            • Klimax
            • 12 months ago

            You are not going to get any further, because just basic search will find them. You already know all necessary keywords…

      • Redocbew
      • 12 months ago

      There’s also been a few years of OS patches and driver updates since the last time the i7-5775C was tested. Windows 10 wasn’t even released until a month after this chip launched.

      Even if Spectre and Meltdown had never happened, those results, while entirely accurate at the time, just can’t be compared to the results we’d get today.

    • chuckula
    • 12 months ago

    Only AMD makes Fine Wine!

      • LoneWolf15
      • 12 months ago

      Meant specifically to pair with Chuckula Cheese.
