Intel’s Core i9-7980XE and Core i9-7960X CPUs reviewed

There’s never been a better time to be a high-end system builder. Intel’s Skylake Server core made its way onto uber-desktops back in June as the Skylake-X family of chips, and AMD returned the serve with its Ryzen Threadripper CPUs and the X399 platform. Now it’s Intel’s turn to raise the stakes again.

On the bench today, we have the 16-core, 32-thread Core i9-7960X and the newest Extreme Edition CPU: the 18-core, 36-thread Core i9-7980XE. The Core i9-7960X is a core-for-core, thread-for-thread match against the Ryzen Threadripper 1950X, while the Core i9-7980XE lays claim to what is perhaps the highest core and thread count available in a “consumer” CPU today.

Of course, neither of these chips comes cheap. The Core i9-7980XE lives up to its Extreme Edition lineage with an eye-popping $1999 sticker price, and the Core i9-7960X isn’t far behind at $1699. These price tags put the highest-core-count Skylake-X CPUs in a somewhat uncomfortable spot for a couple of reasons. For one, that kind of money for a CPU is well within workstation-class territory, but neither of these CPUs does anything to address Threadripper’s higher CPU PCIe lane complement or ECC RAM support. Recall that Threadripper CPUs and the X399 platform offer both ECC RAM support and 60 PCIe 3.0 lanes directly connected to the CPU. Those resources are available from every Threadripper, too.

In an apparent response to AMD’s aggressive marketing of its platform advantages, Intel has begun aggregating the number of PCIe lanes available from both the chipset and CPU in its marketing materials. That aggregation is a bit disingenuous, though, because it doesn’t account for the fact that PCIe lanes from the X299 chipset have to traverse the DMI 3.0 link and its roughly 32 Gbps of bandwidth before reaching the host CPU. Some X399 peripheral controllers do need to travel over a similar link on Threadripper systems, but the jockeying for bandwidth from chipset to CPU should be a lot less rowdy there. Even if Threadripper CPUs don’t outperform the Skylake-X competition, the robustness of the X399 platform for workstation-class uses remains a point in its favor.
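The “roughly 32 Gbps” figure for the DMI 3.0 link falls out of its PCIe 3.0 x4-equivalent signaling. Here’s a quick back-of-the-envelope check (a sketch of the arithmetic, not an official spec calculation):

```python
# DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link:
# four lanes at 8 GT/s, with 128b/130b line-code overhead.
lanes = 4
transfer_rate_gt_s = 8           # gigatransfers per second, per lane
encoding_efficiency = 128 / 130  # 128b/130b encoding

usable_gbps = lanes * transfer_rate_gt_s * encoding_efficiency
print(f"~{usable_gbps:.1f} Gbps (~{usable_gbps / 8:.2f} GB/s)")
```

Every chipset-attached device on X299 ultimately has to share that budget on its way to the CPU.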

A block diagram of the Skylake Server core. Source: Intel

Although we’ve already discussed the Skylake Server architecture in detail in our review of the Core i9-7900X, the implementation of that architecture in the Core i9-7960X and i9-7980XE is worth exploring a bit more.

It’s no secret that Intel has long repurposed server hardware for its high-end desktop processors. The company has made multiple Xeon dies with varying core counts to fit the needs of the businesses it serves, but until now, it has never had to press its higher-core-count Xeon silicon into desktop duty.

We weren’t briefed on the various Xeon Scalable Processor dies for the Skylake Server rollout this time around, but this Tom’s Hardware report leads us to conclude that the Core i9-7960X and Core i9-7980XE are bringing Intel’s high-core-count (or HCC) Skylake Server die down from the data center. Other Skylake-X Core i7s and Core i9s use the 10-core low-core-count, or LCC, die as their foundation. This may be the first time that Intel has ever had to bring its HCC Xeon die to its high-end desktop platform. Competition is a wonderful thing.

Like all Core i9 CPUs, these high-core-count chips boast two AVX-512 execution units per core: a dedicated AVX-512 unit on port five of the unified scheduler, and another created through the fusion of the two 256-bit AVX units on ports zero and one. Recall that the Core i7-7800X and Core i7-7820X are only equipped with one AVX-512 unit per core: the one created through fusion of the dual 256-bit units. The dedicated AVX-512 unit on port five is disabled on those chips for market-segmentation reasons. Some users have reported that both AVX-512 execution paths are available even on Core i7 products, but we’re sticking with the official line until we’ve had time to do some directed testing.

| Model | Base clock (GHz) | Turbo clock (GHz) | Turbo Boost Max 3.0 clock (GHz) | Cores/threads | L3 cache | PCIe 3.0 lanes | Memory support | TDP | Socket | Price (1K units) |
|---|---|---|---|---|---|---|---|---|---|---|
| i9-7980XE | 2.6 | 4.2 | 4.4 | 18/36 | 24.75 MB | 44 | Quad-channel DDR4-2666 | 165W | LGA 2066 | $1999 |
| i9-7960X | 2.8 | 4.2 | 4.4 | 16/32 | 22 MB | 44 | Quad-channel DDR4-2666 | 165W | LGA 2066 | $1699 |
| i9-7940X | 3.1 | 4.3 | 4.4 | 14/28 | 19.25 MB | 44 | Quad-channel DDR4-2666 | 165W | LGA 2066 | $1399 |
| i9-7920X | 2.9 | 4.3 | 4.4 | 12/24 | 16.5 MB | 44 | Quad-channel DDR4-2666 | 140W | LGA 2066 | $1199 |
| i9-7900X | 3.3 | 4.3 | 4.5 | 10/20 | 13.75 MB | 44 | Quad-channel DDR4-2666 | 140W | LGA 2066 | $999 |
| i7-7820X | 3.6 | 4.3 | 4.5 | 8/16 | 11 MB | 28 | Quad-channel DDR4-2666 | 140W | LGA 2066 | $599 |
| i7-7800X | 3.5 | 4.0 | N/A | 6/12 | 8.25 MB | 28 | Quad-channel DDR4-2400 | 140W | LGA 2066 | $389 |
| i7-7740X | 4.3 | 4.5 | N/A | 4/8 | 8 MB | 16 | Dual-channel DDR4-2666 | 112W | LGA 2066 | $339 |
| i5-7640X | 4.0 | 4.2 | N/A | 4/4 | 6 MB | 16 | Dual-channel DDR4-2666 | 112W | LGA 2066 | $242 |

With the release of 12-, 14-, 16-, and 18-core CPUs, the Core i9 lineup is complete. You can see that the highest-end CPUs get a 25W TDP bump over the Core i9-7900X and friends, to 165W. Even with the more generous TDP, some have expressed concern about the clock speeds these higher-core-count Core i9s can hit under load. Happily, my experience with this duo suggests Intel’s 2.8 GHz base clock for the i9-7960X and 2.6 GHz base clock for the i9-7980XE are extremely pessimistic for enthusiast desktops with adequate cooling. Intel rates the Core i9-7980XE for 3.4 GHz non-AVX Turbo operation with all cores active, and I can confirm that the chip can hold that speed under a 280-mm liquid cooler. The i9-7960X is rated for an all-core Turbo speed of 3.6 GHz. Here’s the full Turbo Boost 2.0 table for each Skylake-X CPU, straight from the horse’s mouth:

Maximum Turbo Boost 2.0 clock (GHz) by number of active cores:

| Model | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| i9-7980XE | 4.2 | 4.2 | 4.0 | 4.0 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.5 | 3.5 | 3.5 | 3.5 | 3.4 | 3.4 |
| i9-7960X | 4.2 | 4.2 | 4.0 | 4.0 | 4.0 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.9 | 3.6 | 3.6 | 3.6 | 3.6 | X | X |
| i9-7940X | 4.3 | 4.3 | 4.1 | 4.1 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 3.8 | 3.8 | X | X | X | X |
| i9-7920X | 4.3 | 4.3 | 4.1 | 4.1 | 4.0 | 4.0 | 4.0 | 4.0 | 3.8 | 3.8 | 3.8 | 3.8 | X | X | X | X | X | X |
| i9-7900X | 4.3 | 4.3 | 4.1 | 4.1 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | X | X | X | X | X | X | X | X |
| i7-7820X | 4.3 | 4.3 | 4.1 | 4.1 | 4.0 | 4.0 | 4.0 | 4.0 | X | X | X | X | X | X | X | X | X | X |
| i7-7800X | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | X | X | X | X | X | X | X | X | X | X | X | X |
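The flagship rows of that table are easy to encode if you want to reason about expected clocks under a given thread load. A small sketch using the figures above (the helper name is ours, not Intel's):

```python
# Turbo Boost 2.0 bins (GHz) by number of active cores, per Intel's table.
TURBO_BINS = {
    "i9-7980XE": [4.2, 4.2, 4.0, 4.0, 3.9, 3.9, 3.9, 3.9, 3.9,
                  3.9, 3.9, 3.9, 3.5, 3.5, 3.5, 3.5, 3.4, 3.4],
    "i9-7960X":  [4.2, 4.2, 4.0, 4.0, 4.0, 3.9, 3.9, 3.9, 3.9,
                  3.9, 3.9, 3.9, 3.6, 3.6, 3.6, 3.6],
}

def turbo_clock(model: str, active_cores: int) -> float:
    """Return the rated non-AVX turbo clock for a given active-core count."""
    bins = TURBO_BINS[model]
    if not 1 <= active_cores <= len(bins):
        raise ValueError(f"{model} supports 1-{len(bins)} active cores")
    return bins[active_cores - 1]

print(turbo_clock("i9-7980XE", 18))  # all-core turbo: 3.4
```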

Typical AVX workloads (albeit not AVX-512) caused the i9-7980XE to fall to just 3.2 GHz per core, but as our performance results will show, that drop hardly matters in the big picture. Regardless, I wouldn’t worry about seeing clock speeds under 3 GHz outside of intensive AVX-512 workloads. Given the paucity of programs using those code paths, the average enthusiast shouldn’t have any clock-speed worries at stock speeds.

Like most other Skylake-X CPUs, the i9-7960X and i9-7980XE offer an improved Turbo Boost Max 3.0 implementation compared to Broadwell-E. On these high-core-count CPUs, one should see up to 4.4 GHz speeds on two favored cores.

Intel’s rebalancing of the cache hierarchy on Skylake Server chips means the i9-7960X and i9-7980XE have massive private L2 caches at their disposal. Each core gets 1MB of L2 to work with, for a total of 16MB on the i9-7960X and 18MB on the i9-7980XE. Recall also that the bandwidth between the L1 and L2 caches on these chips has been increased to 128 bytes per cycle for reads and 64 bytes per cycle for writes. At the same time, the L3 cache per core now serves as a victim cache for the L2 above it, and L3 per core has been cut to 1.375 MB. Contrast that with the 2.5 MB of shared L3 per core on Broadwell Xeons. The new L3 allocation leads to 22MB of L3 across all cores on the i9-7960X and 24.75 MB on the i9-7980XE.
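Those per-core allocations make the totals easy to sanity-check. A quick sketch of the arithmetic:

```python
# Skylake-X cache allocations: 1 MB of private L2 per core,
# 1.375 MB of shared (victim) L3 per core.
L2_PER_CORE_MB = 1.0
L3_PER_CORE_MB = 1.375

def cache_totals(cores: int) -> tuple[float, float]:
    """Total L2 and L3 capacity in MB for a given core count."""
    return cores * L2_PER_CORE_MB, cores * L3_PER_CORE_MB

print(cache_totals(16))  # i9-7960X: (16.0, 22.0)
print(cache_totals(18))  # i9-7980XE: (18.0, 24.75)
```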

Now that we’ve revisited the essentials of Skylake-X, it’s time to get to testing.

 

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

| Component | Configuration |
|---|---|
| Processor | AMD Ryzen Threadripper 1950X, AMD Ryzen Threadripper 1920X |
| CPU cooler | Thermaltake Water 3.0 Ultimate 360-mm liquid cooler |
| Motherboard | Gigabyte X399 Aorus Gaming 7 |
| Chipset | AMD X399 |
| Memory size | 32GB |
| Memory type | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | 3600 MT/s (actual) |
| Memory timings | 16-16-16-36 2T |
| System drive | Intel 750 Series 400GB NVMe SSD |

 

| Component | Configuration |
|---|---|
| Processor | Intel Core i9-7980XE, Intel Core i9-7960X, Intel Core i9-7900X, Intel Core i7-7820X |
| CPU cooler | Corsair H115i 280-mm liquid cooler |
| Motherboard | Asus Prime X299-Deluxe |
| Chipset | Intel X299 |
| Memory size | 32GB |
| Memory type | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | 3600 MT/s (actual) |
| Memory timings | 16-16-16-36 2T (DDR4-3600) |
| System drive | Samsung 850 Pro 512GB |

 

| Component | Configuration |
|---|---|
| Processor | Intel Core i7-6950X |
| CPU cooler | Cooler Master MasterLiquid Pro 280 280-mm liquid cooler |
| Motherboard | Gigabyte GA-X99-Designare EX |
| Chipset | Intel X99 |
| Memory size | 64GB |
| Memory type | G.Skill Trident Z DDR4-3200 (rated) SDRAM |
| Memory speed | 3200 MT/s (actual) |
| Memory timings | 16-18-18-38 2T |
| System drive | Samsung 960 EVO 500GB |

They all shared the following common elements:

| Component | Configuration |
|---|---|
| Storage | 2x Corsair Neutron XT 480GB SSD, 1x HyperX 480GB SSD |
| Discrete graphics | Nvidia GeForce GTX 1080 Ti Founders Edition |
| Graphics driver version | GeForce 385.69 |
| OS | Windows 10 Pro with Creators Update |
| Power supply | Seasonic Prime Platinum 1000W |

Our thanks to Intel and AMD for all of the CPUs we used in our testing. Our thanks to AMD, Intel, Gigabyte, Corsair, Cooler Master, and G.Skill for helping us to outfit our test rigs with some of the finest hardware available, as well.

Our X299 test rig

Some additional notes on our testing methods:

  • Unless otherwise noted, we ran our gaming tests at 2560×1440 at a refresh rate of 144 Hz. V-sync was disabled in the driver control panel.
  • For our Intel test system, we used the Balanced power plan, as we have for many years. Our AMD test bed was configured to use the Ryzen Balanced power plan that ships with AMD’s chipset drivers.
  • All motherboards were tested using the most recent firmware available from the board vendor, including pre-release versions provided exclusively to the press where necessary.
  • All available Windows updates were installed on each test system before testing commenced. The most recent version of each software application available from each vendor was used in our testing, as well.

Our testing methods are generally publicly available and reproducible. If you have questions, feel free to post a comment on this article or join us in the forums.
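The run-aggregation mentioned above (the median of at least three runs) is simple to reproduce with Python's standard library. The numbers here are illustrative, not our data:

```python
import statistics

# Three hypothetical runs of one benchmark (e.g., average FPS).
runs = [98.7, 100.2, 99.5]

# The median resists a single outlier run better than the mean does.
result = statistics.median(runs)
print(result)  # 99.5
```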

 

Memory subsystem performance

Let’s kick off our testing with a quick look at main memory performance. Both the Ryzen Threadripper CPUs and every Core i9 in this test are running with the same DDR4-3600 kit, so we can easily make apples-to-apples comparisons of their performance using AIDA64’s built-in memory benchmarks.
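AIDA64's memory benchmarks use hand-tuned routines, but the basic shape of a copy-bandwidth measurement can be sketched crudely in a few lines (this is nothing like AIDA64's implementation, just an illustration of what's being measured):

```python
import time

SIZE = 64 * 1024 * 1024  # 64 MB working set
src = bytearray(SIZE)

start = time.perf_counter()
dst = bytes(src)         # one large memory copy
elapsed = time.perf_counter() - start

# A copy both reads and writes every byte, so count the traffic twice.
gb_per_s = (2 * SIZE) / elapsed / 1e9
print(f"copy bandwidth: ~{gb_per_s:.1f} GB/s")
```

Interpreter overhead and the lack of non-temporal stores mean a figure like this will sit well below what AIDA64 reports; the point is only the read-plus-write accounting.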

Intel’s memory controller on the high-core-count Skylake-X die seems to favor read performance over writes. Given the potential hunger for data of these chips’ AVX-512 units, that’s probably the right balance to strike. Copy bandwidth is slightly higher than on the low-core-count chips, but only slightly. The Ryzen Threadripper duo beats out the Skylake-X CPUs in memory writes and copies, but read bandwidth lags behind Skylake-X chips of equal or higher core counts.

Even though they have many more cores and threads—and thus a broader mesh to traverse—than low-core-count Skylake-X chips, the i9-7960X and i9-7980XE deliver memory access latencies in line with their less resource-endowed cousins. That’s a testament to the scalable nature of the Skylake Server mesh architecture.

Some quick synthetic math tests

To get a quick sense of how these chips stack up, we turn to AIDA64’s synthetic benchmarks. Photoworxx stresses the integer SIMD units of these chips with AVX. FPU Julia tests single-precision floating-point throughput and uses AVX instructions (though not AVX-512), while FPU Mandel puts those same instructions to work in the service of double-precision throughput.

Hm. It seems Photoworxx may not be fully optimized for CPUs with this many cores. Perhaps the benchmark will be optimized for these chips in the future, at which point we’ll have to retest.

As we’d expect, throwing more cores at the AIDA64 Hash benchmark produces basically linear increases in bandwidth for our Skylake-X chips.

Ryzen Threadripper chips do outpace the Intel competition in this benchmark, but that’s because the Zen architecture has what seems to be little-publicized support for Intel’s SHA Extensions. These extensions permit hardware acceleration of some of the SHA family of algorithms, and CPU Hash uses SHA-1 as its algorithm of choice. SHA-1 isn’t particularly useful in practice any longer, but SHA-256 is, and the folks at SiSoft report similar speedups for that algorithm. AVX implementations of other SHA versions might help Intel processors close the gap, though.

Thanks to their large complements of wider AVX units compared to Threadripper CPUs, the i9-7960X and i9-7980XE come close to doubling the throughput of the 1950X in the Julia test, and they still maintain a healthy lead in the Mandel test. That’s excellent performance. Now, let’s see how these chips handle games.

 

Hitman (DX12)
Hitman‘s DirectX 12 renderer can stress every part of a system, so we cranked the game’s graphics settings at 1920×1080 and got to testing.


Hitman‘s DX12 mode favors lots of cores, but performance seems to regress once we move beyond 10 cores or so. Neither the i9-7960X nor the i9-7980XE can top the i7-7820X, and the AMD CPUs fall behind their many-core Intel counterparts. Despite their high average frame rates, however, the i9-7960X and i9-7980XE fall toward the back of the pack in our 99th-percentile frame time measure of delivered smoothness. Only the i7-7820X does worse, if a 99th-percentile frame time of 16.2 ms can even be called bad to begin with.
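For readers new to our metrics: the 99th-percentile frame time is the frame time that 99% of a run's frames come in under. One common (nearest-rank) way to compute it from raw frame times looks like this — a sketch, not our exact tooling:

```python
import math

def percentile_frame_time(frame_times_ms: list[float], pct: float = 99) -> float:
    """Nearest-rank percentile: the value pct% of frames fall at or under."""
    ordered = sorted(frame_times_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# 99 fast frames and one slow one: the lone spike sets the 100th
# percentile, not the 99th, which is why rare stutter can hide here.
times = [10.0] * 99 + [50.0]
print(percentile_frame_time(times))  # 10.0
```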

That 99th-percentile frame time weirdness for the i7-7820X is a theme of this review. You will want to pay attention to the chip’s frame-time plots in the graphs above. For reasons we still haven’t been able to crack, this chip exhibits its own particular brand of stutter that’s clearly evident in frame-time graphs. We can analyze just how much of our one-minute test run those little spikes occupy using our advanced “time-spent-beyond” metrics, so let’s get to it.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. 8.3 ms corresponds to 120 FPS, an even more demanding standard that fans of high-refresh-rate monitors will want to pay close attention to. Finally, we’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
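The computation behind those graphs can be sketched simply: for every frame that takes longer than the threshold, accumulate the time spent past it. A minimal illustration of the idea:

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds spent past the threshold across all frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Three frames: only the 20-ms and 25-ms frames contribute past 16.7 ms,
# adding 3.3 ms and 8.3 ms respectively.
frames = [10.0, 20.0, 25.0]
print(time_spent_beyond(frames, 16.7))  # ~11.6 ms
```

A run can post a fine average FPS and still rack up time beyond a threshold, which is exactly the behavior this metric is designed to expose.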

Even though its frame-time plot is spikier than we’d like, the i7-7820X’s weirdness shows up as less than a tenth of a second spent beyond 16.7 ms. Even at the more demanding 8.3-ms mark, the i7-7820X’s spikes add up to just about a second and a half of our one-minute test run. Both the Core i9-7960X and i9-7980XE do worse, and the Ryzen 7 1800X is far behind.

Even though we’d prefer not to see the i7-7820X’s spikiness at all, our time-spent-beyond-X graphs suggest those spikes only represent a tiny proportion of frames rendered. I certainly didn’t notice hitchiness or other unpleasantness while gaming on the chip in Hitman. Let’s see whether that continues to be the case.

 

Deus Ex: Mankind Divided (DX11)

With its rich and geometrically complex environments, Deus Ex: Mankind Divided can prove a challenge for any CPU at high enough refresh rates. We applied our preferred recipe of in-game settings to put the squeeze on the CPU and got to it.


Deus Ex gives all of our Intel CPUs a chance to shine versus the red team’s competitors. Neither the i9-7960X nor the i9-7980XE takes the top spot, but they still deliver higher performance than AMD’s chips do. The Core i7-7820X’s spikiness continues to be bad enough to drop its 99th-percentile frame time performance back with the Ryzens, even though its average-FPS figure suggests a fine experience.


None of our contenders leave the graphics card waiting for work long enough to cause substantial time spent beyond 16.7 ms. Even at the 8.3-ms threshold, the i9-7960X, i9-7980XE, and the i7-7820X all hold up the graphics card for about half as long as the Ryzen chips do. That fine performance continues at the 6.94-ms mark. Even with its unusual spikiness, the i7-7820X is hardly delivering a bad time.

 

Crysis 3

Although Crysis 3 is nearly four years old now, its lavishly detailed environments and demanding physics engine can still stress every part of a system. To put each of our CPUs to the test, we took a one-minute run through the grassy area at the beginning of the “Welcome to the Jungle” level with settings cranked at 1920×1080.


We generally expect Crysis 3 performance to scale with the number of cores available, but the game doesn’t actually run better as the core counts climb past 10. The i9-7960X and i9-7980XE land midpack in our average-FPS figures, and they turn in the “worst” 99th-percentile frame times of this bunch. When you’re still delivering 99% of frames at an instantaneous rate of about 73 FPS, it’s hard to call the overall experience bad with any of these chips.


None of our chips spend any time past the troublesome 50-ms or 33.3-ms marks, thankfully. We’d expect the spikiness of the Core i7-7820X to begin showing up at the 16.7-ms mark, and it kind of does. Thing is, all those spikes only add up to four milliseconds of a one-minute test run spent past 16.7 ms. Moving down to the 8.3-ms mark, the Core i7-7820X is spending less than a second of our one-minute test run holding up the powerful GeForce GTX 1080 Ti, while the higher-core-count Core i9s log about three to four seconds of tough frames at that mark. I can’t say I notice these spikes as anything more than fleeting unsmoothness in gameplay, and while we’d obviously like a flatter line in our frame-time plots, it’s hard to argue that the i7-7820X is delivering a subpar gaming experience.

Even at the tough 6.94-ms threshold, the i7-7820X spends a perfectly respectable three seconds of our test run holding up the graphics card. More troublesome is the fact that the i9-7960X spends about six seconds of our test run below 144 FPS, while the i9-7980XE spends about eight seconds in a similar predicament. Whatever issue is causing that hold-up, it’s not exhibited by the lower-core-count Intel chips or Ryzen Threadrippers.

 

Watch Dogs 2
Watch Dogs 2 can seemingly occupy every thread one can throw at it, so it’s a perfect CPU test. We turned up the eye candy and walked through the forested paths around the game’s Coit Tower landmark to get our chips sweating.


Here’s a case where Intel’s latest and greatest prevail. All of our Skylake-X chips hang just a couple of frames per second apart on average, and they all cluster toward the top of the 99th-percentile frame time chart—all of them, of course, save the i7-7820X.


Once again, only a few frames show up past the 16.7-ms mark for any of these chips, so we have to consider stricter stuff. At the 8.3-ms mark, the Skylake-X chips—even the Core i7-7820X—all spend several seconds less than the Ryzen competition on tough frames that hold up the graphics card. Once again, the i7-7820X’s spikiness looks bad on a graph, but its actual impact on gameplay simply isn’t a big deal.

 

Grand Theft Auto V
Grand Theft Auto V can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with the game’s settings turned all the way up at 1920×1080. Unlike most of the games we’ve tested so far, GTA V leans heavily on a single thread or two, and with no Vulkan or DirectX 12 renderer on offer, there’s no way around it.


GTA V doesn’t much like running on many-core chips, and the complex fabrics and meshes of this server-class hardware don’t seem to help. Past 10 cores and 20 threads, performance begins to regress in our average-FPS charts on both AMD and Intel CPUs, and the i9-7960X and i9-7980XE fall toward the back of the pack in both average FPS and in our 99th-percentile frame time metrics. As usual, the i7-7820X is off in the weeds a bit, too.


As we’d expect from those 99th-percentile frame time figures, the i9-7960X spends about seven seconds of our one-minute test run past 8.3 ms on tough frames, and the i9-7980XE fares even worse. As usual, the i7-7820X’s frame-delivery weirdness just doesn’t add up to that much time past 8.3 ms. At 6.94 ms, the i9-7900X and i7-6950X leave everything else in the dust.

 

Streaming performance with Hitman and OBS

One of the uses that Intel and AMD have hyped the most for their highest-end desktop processors this year is single-PC gaming and streaming. The most avid Twitch streamers, as we understand it, have tended to set up dedicated PCs for video ingestion and processing to avoid affecting game performance, but the advent of these many-core CPUs may have opened up a world where it might be more convenient to run one’s stream off a single PC.

Although one might wonder why people still make a hullabaloo about CPU encoding performance when hardware-accelerated game streaming is available from both major GPU software packages, the fact of the matter seems to be that the most demanding professionals still choose software encoding. The reason is that Twitch and other streaming services impose restrictive bit-rate caps on streamed content. GPU-accelerated encoders like GeForce Share (née Shadowplay) and Radeon ReLive make it easy to stream without affecting gaming performance much, but they might not offer the highest-quality viewing experience to fans within a given bit rate. For achieving the best results possible, the name of the game is still software encoding with x264.

Given the exploding popularity of Twitch and similar services, this review was as good a time as any for us to start digging into single-PC gaming and streaming performance. I’m obviously not a professional streamer by any stretch of the imagination, but I did learn enough to be dangerous with the popular OBS Studio tool for this review. I only offer that caveat because a seasoned professional in this space might use different settings and software to achieve their preferred results. I’m not running a webcam or overlay, for example, and those add-ons could further affect CPU performance.

I played with OBS’ various x264 settings to achieve what looked (to my eye) like the best visual quality possible without unduly bogging down our less-powerful test rigs. That turned out to be the “fast” x264 preset. For perspective, my TR colleague Zak Killian says he has to use the “veryfast” or “ultrafast” presets to make gaming and streaming possible at once on his Core i7-4790K system. I otherwise configured OBS using Twitch’s recommended guidelines.

It’s worth noting that we’re not considering x264 encoder performance in isolation for this review. While metrics like dropped frames are certainly important to the viewer experience, we don’t have the methods to effectively process or present that data yet. We did monitor stream quality during our testing and ensured that our particular encoder settings weren’t producing choppy or otherwise ugly stream delivery, but we didn’t dive deep into OBS log files or anything to that effect. We might consider these metrics in future articles, but for now, we’re worried about the gameplay experience these CPUs deliver to the streamer.




Throw a single “fast” x264 stream onto these chips, and only the i9-7960X and i9-7980XE remain unaffected by the moderate performance hit that affects the other high-end desktop chips in this lineup. In fact, their 99th-percentile frame times even improve slightly compared to their gaming-only performance. We suspect the extra load keeps the game’s threads from being scattered all over these high-core-count CPUs.

The Ryzen Threadripper CPUs aren’t delivering the highest peak frame rates under this load, but their 99th-percentile frame times are right in line with those of the Skylake-X chips. The Core i7-7820X handily outperforms the Ryzen 7 1800X in our average-FPS metric, but its 99th-percentile frame time is right down there with that of the 1800X. Let’s see just how much time these chips spend at the ragged edge with our time-spent-beyond-X graphs.


Once again, it’s most informative to click over to the 8.3-ms threshold straight away for our streaming tests. Even though it shares a 99th-percentile frame time with the Ryzen 7 1800X, the i7-7820X spends less than a third of the time past 8.3 ms that the 1800X does, at about seven and a half seconds compared to 24. That figure pales next to the rest of the Skylake-X family’s great performance, of course, but it’s still better than even the Ryzen Threadripper 1920X and 1950X can manage. As for the Core i9 CPUs, they’re simply in a different class of performance than the Threadrippers under this streaming workload. The i9-7900X, i9-7960X, and i9-7980XE are barely fazed by these test settings (and neither is the i7-6950X, for what it’s worth).

At least in this simple testing, the Core i9 family seems to be the best thing going for fluid frame rates from a streaming PC. Our testing suggests single-stream game performance with Intel CPUs doesn’t really increase past 10 cores, however. Given that fact, you could probably load down the i9-7960X and i9-7980XE even further without affecting game performance much, perhaps with even higher-quality x264 settings, multiple streams, or with on-the-fly encoding to archival video. Although it might be cheaper to build separate, dedicated PCs for gaming and streaming, there is an undeniable convenience to having everything on one box for those who can afford it.

With that, we conclude our gaming benchmarks. As expected, 1920×1080 gaming is something these many-core chips can do in a pinch, but it’s not their forte in the least. Much cheaper CPUs can deliver far better gaming-only performance at this resolution. Game streamers should find a lot to love from any Core i9 (or Ryzen Threadripper) CPU, though we’ll need to load these high-end chips down even further to see where they break in future testing.

Our tests also showed that while the Core i7-7820X does have something of a frame-time consistency problem, its effect on delivered performance is actually pretty minor. If Intel can get to the bottom of the i7-7820X’s stutter in games, it’ll have the best value in high-end desktop processors today. For now, the chip remains on the knife-edge of greatness.

Now that we’ve seen whether these CPUs have game, it’s time to put away the toys and slip into something more business-casual.

 

JavaScript

The usefulness of JavaScript benchmarks for comparing browser performance may be on the wane, but these collections of tests are still a fine way of demonstrating the real-world single- or lightly-threaded performance differences among CPUs.

Most Core i9 and Threadripper owners will not be browsing Facebook all day, but these three benchmarks offer an idea of how these chips will perform in lightly-threaded web browsing tasks. Unfortunately for the high-core-count Core i9s, the picture is a mixed one. The i9-7960X and i9-7980XE trail especially far behind the pack in the JetStream benchmark, end up about where we’d expect in Octane, and bookend our Kraken results. It seems some of the latency-sensitive tests that make up these test suites don’t play well with the high-core-count Skylake-X die.

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
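Picking the job count to match the hardware thread count is straightforward to reproduce; a sketch of how one might derive it (Qtbench itself takes the count as a script parameter, so the `make` string here is purely illustrative):

```python
import os

# Dispatch one compile job per hardware thread, as our Qtbench runs do.
jobs = os.cpu_count() or 1      # os.cpu_count() can return None
make_invocation = f"make -j{jobs}"
print(make_invocation)
```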

The Core i9 CPUs come out on top in this test, but we’re clearly high on the curve of diminishing returns with Qtbench and 36 threads.

File compression with 7-zip

Our high-core-count Core i9s earn a massive lead in the compression portion of this test, but the i9-7960X, Threadripper 1950X, and i9-7980XE finish in a dead heat in the decompression portion of this benchmark.

Disk encryption with Veracrypt

In the accelerated portion of our Veracrypt test, both the i9-7960X and i9-7980XE lead the pack by a wide margin. The unaccelerated Twofish algorithm lets Ryzen shine, though.

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

On a single thread, Skylake-X chips take a moderate lead over the Ryzen competition, as we’d expect. Turbo Boost Max 3.0 probably helps, but so does the Skylake architecture’s higher IPC to begin with.

Loose every core of these chips on Cinebench, and the i9-7960X outpaces the Threadripper 1950X thread-for-thread. The i9-7980XE is even further ahead, as it ought to be.

Blender rendering

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Blender loves it some threads, but neither of the high-core-count Core i9s open much of a lead over the Threadripper 1950X in this test. It seems there’s a wall of diminishing returns around 16 cores in this benchmark.

Corona rendering

Here’s a new benchmark for our test suite. Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it was a no-brainer to give it a spin on these CPUs. The benchmark reports results in millions of rays cast per second, and we’ve converted that figure to megarays for readability.

If Blender doesn’t see much benefit from the move to these many-core chips, Corona sure does. Both the i9-7960X and i9-7980XE jump far ahead of the Threadripper 1950X here.

Handbrake transcoding

Handbrake is a popular video-transcoding app that recently hit version 1.0. To see how it performs on these chips, we’re switching things up from some of our past reviews. Here, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

The i9-7960X and i9-7980XE shave 40 seconds off the Threadripper 1950X’s time, but the x265 encoder doesn’t seem to gain anything from having 18 cores to play with.

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here. We configured Euler3D to use every thread available from each of our CPUs.

It should be noted that the publicly-available Euler3D benchmark is compiled using Intel’s Fortran tools, a decision that its originators discuss in depth on the project page. Code produced this way may not perform at its best on Ryzen CPUs as a result, but this binary is apparently representative of the software that would be available in the field. A more neutral compiler might make for a better benchmark, but it may also not be representative of real-world results with real-world software, and we are generally concerned with real-world performance. With all that in mind, it should be no surprise that the Core i9-7960X and i9-7980XE are the undisputed champions of Euler3D, and by no small margin.

 

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU with a 24-bit depth and 96 kHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. In response to popular demand, we’re also testing the same buffer depths at a sampling rate of 48 kHz. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
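To see why a 64-sample buffer at 96 kHz is "punishing," consider the real-time deadline it imposes: the CPU must finish producing each buffer of audio before the interface drains it. A quick back-of-the-envelope sketch:

```python
def buffer_deadline_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time available to fill one ASIO buffer before audio drops out."""
    return buffer_samples / sample_rate_hz * 1000.0

for rate in (96_000, 48_000):
    for depth in (64, 128):
        print(f"{depth:>3} samples @ {rate // 1000} kHz -> "
              f"{buffer_deadline_ms(depth, rate):.2f} ms per buffer")
```

At 96 kHz and 64 samples, the entire plugin chain has only about 0.67 ms per buffer, which is why per-core speed matters so much at these settings.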

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.


At 96 kHz and a buffer depth of 64, we are most likely testing per-core throughput more than anything. By that measure, it’s perhaps not surprising that chips with the most L2 cache and the highest clocks perform the best. Relax the buffer depth to 128, and performance improves on all these chips. For the i9-7960X and i9-7980XE, though, the performance increase is simply jaw-dropping. These are by far the best-performing chips in DAWBench VI at a high sampling rate.


The DAWBench DSP test is a much more even playing field for our test subjects at 96 kHz. The Threadripper 1950X can keep up with the i9-7980XE here. In fact, the i9-7960X emerges as our champion at both buffer depths.


Even if we halve our sampling rate, the blue team retains a wide lead. The Core i9-7960X basically doubles the Threadripper 1950X’s performance at a buffer depth of 64, a figure that maps nicely onto the vast gulf in floating-point throughput between Zen and Skylake-X we saw early in this piece. The only thing keeping the gulf from being wider at 128 samples is that we maxed out the number of voices of polyphony available in DAWBench VI. Kind of astounding, really.


Once again, DAWBench DSP evens out the playing field for these chips. The Threadripper 1950X has no trouble keeping up with the highest-end Core i9s here.

Our expanded DAWBench testing continues to suggest that Ryzen Threadripper CPUs are great chips for work that involves truckloads of DSPs, but virtual instrument performance is still far and away the domain of Intel chips. If you need competence in both domains, Skylake-X CPUs are the way to go. 

 

Power consumption and efficiency

We can get a rough idea of how efficient these chips are by monitoring system power draw in Blender. Our observations show that Blender draws about the same amount of power at every stage of the bmw27 benchmark, so it’s an ideal guinea pig for this kind of calculation. First, let’s revisit the amount of time it takes each of these chips to render our Blender “bmw27” test scene:

Next, we check system power draw at the wall using our trusty Watts Up power meter:

Strangely, load power consumption for the Ryzen Threadripper duo is up compared to our initial review of those chips. We’re using a new version of Blender, updated firmware on all of our Ryzen boards, and a higher memory speed, so those changes might explain the higher power consumption. These results were repeatable, so we can probably chalk them up to the changes in our test environment since our initial review. In any case, the high-core-count Skylake-X chips consume less power under load compared to the Threadrippers, and that augurs well for their efficiency in our final reckoning.

To estimate the task energy consumed in joules for our rendering workload, we simply multiply the time each chip needs to complete the scene by its load power. We can then plot that task energy figure against time-to-completion to get this intuitive view. The best results in our efficiency scatter tend toward the lower left of the chart, where power consumption is lowest and time to completion is shortest.
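The task-energy arithmetic is as simple as it sounds; here's a sketch using made-up placeholder numbers rather than our measured results:

```python
def task_energy_joules(render_time_s: float, load_power_w: float) -> float:
    """Energy to finish the workload: watts x seconds = joules."""
    return render_time_s * load_power_w

# Hypothetical chip: renders bmw27 in 180 s while the system draws 300 W.
energy = task_energy_joules(180.0, 300.0)
print(f"{energy:.0f} J ({energy / 1000:.0f} kJ)")  # one point on the efficiency scatter
```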

At least with our particular test rigs, the Core i9-7960X and i9-7980XE consume moderately less power than the Ryzen Threadripper 1950X while delivering slightly better performance. At least at stock speeds, the common hand-wringing about power consumption and efficiency with Skylake-X seems a bit overblown to us. Overclocking these chips is a different story, of course, but not one we’ll be exploring today.

 

Conclusions

It’s time once again to condense all of our test results into our famous value scatter plots. We use a geometric mean of all of our real-world results to ensure that no one test has an undue impact on the overall index. First up, let’s look at gaming performance.
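A geometric mean works here because it combines each benchmark's relative result multiplicatively, so one outlier test can't swamp the index. A minimal sketch with illustrative values, not our actual results:

```python
from math import prod

def geomean(relative_scores: list[float]) -> float:
    """Geometric mean: the n-th root of the product of n scores."""
    return prod(relative_scores) ** (1.0 / len(relative_scores))

# Per-test performance relative to a baseline chip (1.0 = baseline).
print(f"overall index: {geomean([1.10, 0.95, 1.40, 1.25]):.3f}")
```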


The short take here: don’t drop $2000 on a CPU for 1920×1080 gaming. The Core i9-7960X and i9-7980XE are not going to deliver world-beating performance for high-refresh-rate gaming at low resolutions, and you can build a system that’s far better at that purpose for far less money. If you have a GTX 1080 Ti like the one we used in our test rig, you should really be hooking it up to a 2560×1440 or 3840×2160 monitor to begin with.

Taking these results for what they’re worth, however, the Core i9-7960X and i9-7980XE don’t deliver an appreciably better gaming experience than the Ryzen Threadripper 1920X and Threadripper 1950X. The Core i9s do shine in game streaming and multitasking, though, and we’d recommend planning to do more than gaming alone before you drop this kind of coin on a processor.

Shocker: the Core i9-7960X and Core i9-7980XE are easily the two fastest CPUs we’ve ever tested for productivity workloads. Perhaps because it has to trade a bit of all-core clock speed for core count, though, the i9-7980XE does little to separate itself from its 16-core sibling in most of our tests. There are a few cases in our testing where the extra cores help the i9-7980XE distinguish itself from the i9-7960X a bit, but not many. We gotta admit: part of the problem is finding enough things to do with 18 cores on the desktop to begin with.

Even with that caveat in mind, the i9-7980XE doesn’t seem worth the whopping $300 extra over the i9-7960X unless you absolutely must have every last drop of performance possible from an X299 system. That’s nothing new for high-end Intel CPUs, though. We’ve long suggested that taking one step back from the top will get you most of the performance of the top-of-the-line chip for a lot less money, and that’s as true of the i9-7960X as it’s ever been.

Another mark against the professional-grade performance of the i9-7960X and i9-7980XE is that the X299 platform has both feet firmly planted on the consumer side of the fence. X299 motherboards don’t support ECC RAM, and they have a hard limit of 128 GB of memory. Intel will sell you ECC support and a higher RAM ceiling on one of its Xeon-W CPUs, but the price tags of those chips are truly eye-watering when compared to their X299 counterparts. AMD’s X399 platform offers both ECC support and a higher theoretical RAM capacity than X299, no questions asked. I have a hunch that we haven’t seen the upper limit of Threadripper CPUs’ core counts, either. Folks whose workloads aren’t bolstered by what Core i9 CPUs have to offer will still find Threadrippers a great value.

Even for those who can benefit from the virtues of Skylake-X, it’s kind of crazy how much dough one has to spend to get a chip that consistently beats the Ryzen Threadripper 1950X. The Core i9-7960X is 70% more expensive than the Threadripper 1950X, and the i9-7980XE is twice as expensive. If you look at our final performance index alone, the i9-7960X is about 24% faster than the Threadripper 1950X overall. Given how little its two extra cores seem to add to the equation, the i9-7980XE will never be a good value.
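To make that value math concrete: taking our overall index (i9-7960X about 24% faster) and launch prices ($1699 for the i9-7960X; we assume $999 for the Threadripper 1950X, consistent with the 70%-more-expensive figure), performance per dollar works out as sketched below.

```python
def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

tr_1950x = perf_per_dollar(1.00, 999)   # baseline chip (assumed $999 price)
i9_7960x = perf_per_dollar(1.24, 1699)  # ~24% faster overall, per our index

print(f"i9-7960X offers {i9_7960x / tr_1950x:.0%} of the 1950X's performance per dollar")
```

In other words, the 1950X keeps a clear perf-per-dollar edge even where the Core i9 wins outright on performance.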

Let’s be real, though: once you’re shopping in the realm of $1000-and-up CPUs, a general sense of value isn’t a major concern. Return on investment makes for a stronger case. If your PC needs to run software as fast as possible and perhaps make you money while doing it, Core i9 systems could let you do more in less time or simply let you do more, period. Our value index necessarily ignores the fact that buyers in this range are likely looking at heavy-duty chips for workload-specific reasons. Some parts of our benchmark suite run at least twice as fast on Core i9s compared to the 1950X, and if those cases match yours, the extra cost of a Core i9 might be easier to stomach.

While we’re at it, I have to repeat my continued bittersweet feelings about the Core i7-7820X here. This chip would easily be the best bang-for-the-buck CPU in high-end desktops right now, but for the fact that it continues to exhibit some minor frame-delivery weirdness that other Skylake-X chips don’t. To be fair, you likely won’t notice the vanishingly small bits of unevenness the i7-7820X introduces to gameplay, but even a hint of unsmoothness shouldn’t be an issue on a CPU this expensive.

If you’re willing to tolerate that possibility, however, the i7-7820X is possibly the smartest way onto the X299 platform. For about $180 more than a Ryzen 7 1800X right now, you get higher performance across the board, more PCIe lanes, and quad-channel memory support. It’s hard to argue with those improvements.

Even with my concerns about pricing and platform parsimony, there is no denying that the Core i9-7960X and i9-7980XE are both awesome CPUs. In both performance and power efficiency, the i9-7960X and i9-7980XE are best-in-class. On top of these chips’ great performance, it’s just flat-out cool that Intel is shipping the high-core-count version of its Xeon silicon on a consumer platform for the first time. (Thanks, AMD.) These are truly cutting-edge processors, and if you have the scratch for one, you won’t be disappointed.

Like this review? Help us continue to perform the dozens of hours of hands-on testing that go into articles like this one by becoming a TR subscriber today. Subscribers get exclusive benefits as part of their membership, and you can contribute any amount you’d like to support us. Best of all, we were doing crowdfunding before it was cool, so your entire contribution (less transaction fees) helps support our continuing work. Thanks in advance for your support.

Comments closed
    • BIF
    • 2 years ago

    So where is the review? The title had me ready.

      • Jeff Kampman
      • 2 years ago

      Er, what? [url<]https://techreport.com/review/32607/intel-core-i9-7980xe-and-core-i9-7960x-cpus-reviewed[/url<]

      • Mr Bill
      • 2 years ago

      Mouse click the picture.

        • BIF
        • 2 years ago

        Oh, that’s new. Not sure I like the extra step.

    • Mr Bill
    • 2 years ago

    A suggestion…
    When running the Stars Euler3D, I’d like to see multiple values of threads/cores for each tested CPU plotted as curves on a scatter plot, like they show in [url=http://www.caselab.okstate.edu/research/euler3dbmplot.png<]this plot[/url<] on the Euler3D site on the project page that you linked in the review. I know it’s a lot of work, but it would be very interesting to see.

    • Mr Bill
    • 2 years ago

    Page 1 of the review: [quote<]Here's the full Turbo Boost 2.0 table for each Skylake-X CPU, straight from the horse's mouth:[/quote<] Does AMD have a similar table for the Threadripper / Ryzen line? I got the impression from the AMD reviews that there were not as many steps.

    • Mr Bill
    • 2 years ago

    Could this (below) be due to core frequency roll off as core utilization goes up? Frequency drops quite a bit beyond 12 cores (from table on page 1).
    From page 11…
    [quote<]Blender loves it some threads, but neither of the high-core-count Core i9s open much of a lead over the Threadripper 1950X in this test. It seems there's a wall of diminishing returns around 16 cores in this benchmark.[/quote<]

    • yeeeeman
    • 2 years ago

    I am sorry to say but this review kind of puts the 7980XE in a good light and this is not right. I mean I know that AMD forgot to send you a TR sample, but come on! The difference between what Intel has and what AMD has is little from a performance perspective. TR in most other reviews has come out better than Intel when it comes to power consumption. The only whay you could, as a reviewer, to say that 7980XE is a good buy is to be payed for it. Period.

      • K-L-Waster
      • 2 years ago

      “I don’t like Intel therefore you must be a shill.”

        • chuckula
        • 2 years ago

        He tried to out-Krogoth Krogoth.

        Never mess with the master.

          • K-L-Waster
          • 2 years ago

          Let’s be fair: I’ve never seen Krogoth even come close to accusing the site of being in someone’s pocket because he didn’t agree with the review results.

      • Jeff Kampman
      • 2 years ago

      ok

      • trieste1s
      • 2 years ago

      Can it. TR is fine. You want to go on a righteous rampage, perhaps try An******h.

    • tfp
    • 2 years ago

    I miss the old cache latency graphs as I would like to see the impact with the new cache architecture that Intel is using.

    That said I was wondering of the change in overall cache design was causing the frame latency differences in few games where the new CPUs don’t follow the standard trend.

      • Jeff Kampman
      • 2 years ago

      I did that testing here: [url<]https://techreport.com/review/32111/intel-core-i9-7900x-cpu-reviewed-part-one/4[/url<]

        • tfp
        • 2 years ago

        Thanks for the link; it seems the new cache architecture does have an impact, not too surprising. I wonder if the impact stays when the many extra/unutilized cores are disabled. I would expect yes.

    • psuedonymous
    • 2 years ago

    The i7-7820X’s [i<]weird[/i<] performance is frustrating. I was ready to grab one as soon as ASRock's ITX board dropped, but at this point it's worth waiting until Coffee Lake to see how that plays out. You lose two cores, and a couple of m.2 slots (an Optane & m.2 NVME OS/app drive + separate slower m.2 high capacity drive setup is still a tempting idea) but gain single-thread performance and potentially avoid the 'spiking' if the i7-7820X cannot be fixed through microcode updates.

      • chuckula
      • 2 years ago

      If you want to play games Coffee Lake will be better in general, although having 8 cores will help more with parallelized workloads.

      A lot of the spiking has been attributed to the L3 mesh and core turbo boost modes being wonky on the chip in some situations. Doing some manual settings can help alleviate those issues.

      • jihadjoe
      • 2 years ago

      Even in the ThreadRipper review the 7820X just seemed so… inconsistent. In some games it’d be top of the charts, then in another game it’ll suddenly be at the bottom. I wonder WTF is going on in there.

    • Klimax
    • 2 years ago

    [quote<] This may be the first time that Intel has ever had to bring its HCC Xeon die to its high-end desktop platform. Competition is a wonderful thing. [/quote<] Might not be correct [url<]https://www.techpowerup.com/forums/posts/3730384[/url<] (post by cadaveca)

    • kmieciu
    • 2 years ago

    Somehow all that streaming talk brought back a memory of hooking up C64 and VCR …

      • JustAnEngineer
      • 2 years ago

      [url<]https://techreport.com/news/32624/c64-mini-is-a-blast-from-the-eighties-past[/url<]

    • B166ER
    • 2 years ago

    That DAWBench tho….

    • ronch
    • 2 years ago

    This article is about huge 15.x-liter V6/V8 diesel engines from Cummins and Isuzu. Tons of torque. No doubt highly unsuitable for family or personal use but when you need to relocate the Grand Canyon just hook it up to one of these monster diesels and tow it.

    Meanwhile, most of us are content with 2.0L 4-cylinder motors.

      • DancinJack
      • 2 years ago

      I think you’ve invented some new engine that you need to sell to people. 15L in a V6 or V8 is…amazing.

        • James296
        • 2 years ago

        Can I….Can I climb into the cylinders of that V6?

        • kuraegomon
        • 2 years ago

        No he hasn’t: [url<]https://en.m.wikipedia.org/wiki/Cummins[/url<] Scroll to products in the above link, and gaze upon the upper ranges of the IS and QS engines. Ye Gods.

          • Chrispy_
          • 2 years ago

          OH SWEET JESUS

          QSK 23-liter I6 – that’s a gallon per cylinder!

            • JustAnEngineer
            • 2 years ago

            Compare that to Wärtsilä’s much larger units.

            • jihadjoe
            • 2 years ago

            AFAIK the biggest engines are all inline configuration. Nobody wants to change a piston the size of a car while angling it, so V configurations max out at a certain ‘medium’ size.

        • jihadjoe
        • 2 years ago

        Boat (yacht) engines.

        [url<]https://www.engines.man.eu/global/en/marine/yacht-engines/product-range/Product-Range.html[/url<]

        • ronch
        • 2 years ago

        Nope.

        [url<]http://www.isuzu.co.jp/world/product/industrial/index.html[/url<]

      • Anonymous Coward
      • 2 years ago

      The trouble with analogies that compare these processors to massive industrial/commercial diesels is that the cores, especially in AMD’s case, are as lightweight as those of a regular consumer product. There is no deficit in iron here. Unlike an engine that weighs a couple of tons by itself, these high-end processors could sit in grandma’s PC essentially without drawing attention to themselves.

      In search of a car analogy that fits, I’d say Intel has made a W18 (as in a Bugatti Veyron) and AMD showed up with a couple of inexpensive V8s bolted together, to very nearly the same effect.

        • Klimax
        • 2 years ago

        “very nearly” = “gap that one can drive bus through”…

          • Anonymous Coward
          • 2 years ago

          Considering the performance of Zen and *Lake core-vs-core, if AMD did any better, it would have defied some law of physics. I can identify just about no drawbacks to their cut-and-paste engineering.

            • K-L-Waster
            • 2 years ago

            ThreadRipper is basically the same idea as the dual-socket motherboards we used to have back in the 90’s and early 00’s, with the exception that the CPUs are in one package.

            If that sounds like a criticism, it isn’t intended to be. It’s the more modern answer to the question “how can I put more CPU power in one box?”

            The big difference between automotive analogies and computing, of course, is that welding two V8s together in the real world would be an engineering nightmare, whereas asking two R7 1800Xs to load-balance a highly parallel computing task is pretty much a solved problem provided the code is sufficiently thread-friendly.

            • Anonymous Coward
            • 2 years ago

            Yeah AMD really is just putting 2 … or 4 … sockets into one. They seem to have nailed the execution of a classic move. Intel by contrast has spent (I assume) significant effort, silicon, and power budget on what are somewhat niche problems IMHO: their fancy connection and cache topology, and extra-wide execution units. I imagine they are breaking new ground in those areas, as nobody has tied so many powerful cores together on one piece of silicon before.

            But what has Intel’s pursuit of the ultimate chip left on the table for their rivals to feast on? It seems there are a lot of tasks for which Intel’s price premium simply cannot be justified, and there are even tasks for which they are outright worse. There are also tasks for which the price premium [i<]can[/i<] be justified, so Intel would be leaving money on the table by not charging that premium. My favorite solution would be for Intel to make a 3rd tier in their cores. At the bottom there is the "Atom" or whatever that lives on as, then comes something a lot like Zen, and at the top they have their FP and interconnect monster core.

    • albundy
    • 2 years ago

    7980xe…10 times the price of a r7 1800x but not 10 times the performance? what a deal!

      • K-L-Waster
      • 2 years ago

      10 times? Since when is the 1800X a $200 CPU?

      • travbrad
      • 2 years ago

      Welcome to the high-end part of the market.

      • Klimax
      • 2 years ago

      Actually, it is not that simple:
      [url<]https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review/15[/url<] (In a few cases it is a better value than AMD's solution...)

      • DPete27
      • 2 years ago

      Clearly this is the first TR perf/$ scatter plot you’ve looked at.

    • GreatGooglyMoogly
    • 2 years ago

    As I’m gearing up to buy a new general-purpose computer for development and DAW stuff, this was a very interesting review article. Thanks! I was set on the 7900X, but this makes me really think twice and I’m super interested in seeing how the 7940X performs (at 1400 USD, 300 USD less than the 7960X, but actually higher base clocks than the 7920X).

    Any plans on reviewing the 7940X in the near future? I was going to order all the parts next week but this gives me pause.

      • Krogoth
      • 2 years ago

      If you need ECC memory support, then the Xeon W equivalents would be better choices, and surprisingly they aren’t really that much more.

        • GreatGooglyMoogly
        • 2 years ago

        The W-series doesn’t seem to be out yet, and the MSRPs seem to be around 400 USD higher for the same core count (the 14C hasn’t had its pricing revealed yet, so I’m extrapolating). I also like Skylake-X’s high boost freqs for single-core workload snappiness for whenever I’m not fully entrenched in a DAW.

        Luckily, I don’t need ECC, and especially not the added cost that brings. RAM prices are already so depressingly high that it limits me to 64 GB (I’ll be fine with that, but a year ago I would probably have maxed it out at 128 GB because you can never load too many virtual instruments — well, unless you run out of memory that is).

        • smilingcrow
        • 2 years ago

        The 10 core Xeon W is a bit more than the 14 core X series.
        The 18 cores differ by almost $600 with the W series having lower clock speeds also.
        I imagine the W series boards will also be noticeably higher in price.
        There’s also a smaller range for the W series so it misses out on the 12C and 16C so less choice.
        So overall, when you add in the higher price for ECC RAM, which is usually the case, the platform-level price difference for the same level of performance is not that close.

          • Krogoth
          • 2 years ago

          ECC ensures reliability and data integrity, which is much more important to the professionals who would be potential buyers of a 12-16 core system. The price difference is well worth it.

          That’s what makes the 7960X and 7980X so perplexing. They are workstation/server-tier hardware and price-point without any of the platform features that would attract prosumers/professionals. They are overkill for gamers and mainstreams. They are really just “halo effect” products that are meant to claim the silly “fastest desktop CPU throne”.

            • smilingcrow
            • 2 years ago

            I was simply disputing your claim that the price difference is not that much, not whether ECC is worth it.
            The platform has enough features for prosumers as they don’t all go for ECC but as you say it’s not a kosher workstation platform without it.
            It’s a niche and as you say a halo product to a degree but I wonder how many TR systems are used with ECC RAM? That would be interesting.

            Added. Had a quick look and both of the system builders selling TR systems that I found don’t offer them with ECC.
            One of them builds workstations and use X series, TR and Xeons and only offer ECC with Xeons.
            I wouldn’t be surprised if the majority of TR systems don’t use ECC outside of things like HP workstations.

      • Jeff Kampman
      • 2 years ago

      Intel hasn’t hinted that it plans to sample us an i9-7940X to test, but I don’t think you can go wrong with the higher-core-count chip if it’s in your budget.

    • smilingcrow
    • 2 years ago

    The 1950X is as low as £810 in the EU which seems a lot less than the release price.
    If it was still at £1,000 the i9-7940X (14C) might make more sense for some at £1,200 but the difference is now nearer £400 than £200.
    The Intel boards seem to be almost £100 less so if the CPU difference was still only £200 that would have made the platform differential around £100.
    So maybe there has been an unofficial price decrease to address this?
    Overall the 14C Intel might be a more attractive CPU; ignoring the platform differences.

    • maroon1
    • 2 years ago

    LOL, ryzen does not look like power efficient anymore
    [url<]https://techreport.com/r.x/2017_09_27_Intel_s_Core_i9_7980XE_and_Core_i9_7960X_CPUs_reviewed/loadpower.png[/url<]

      • Concupiscence
      • 2 years ago

      It doesn’t really look awful compared to the competition. This isn’t like the days of the FX line, where the chip would make heat like a wood-fired oven and deliver performance in line with a Nehalem i7.

    • Anonymous Coward
    • 2 years ago

    Intel has delivered some kind of supercar among processors, and AMD gives them a real battle with the minivan from their driveway. Two desktop processors and some glue.

      • Krogoth
      • 2 years ago

      Nah, it is more like Intel threw a truck engine into a car chassis. Threadripper is pretty much in the same boat.

      They are workhorse processors. They are not well-suited for gaming and mainstream workloads.

      • Klimax
      • 2 years ago

      Battle? Mostly in old code with SSEx. (Not sure how prevalent it is in workstations) Get to AVX and things look differently.

    • xeridea
    • 2 years ago

    Why no mention of temperatures? It has been shown that these CPUs easily hit 90C+ on 360mm liquid cooler thanks to using TIM on a $2000 CPU.

      • Jeff Kampman
      • 2 years ago

      Wasn’t an issue at stock clocks with a good cooler. Source your quotes.

        • xigmatekdk
        • 2 years ago

        There are trolls/fanboys who based that assessment on the 7980XE under extreme underclocked settings. At stock settings, the 7980XE runs cooler than even the 1950x. Hardware Canucks was able to tame the beast with a midrange Noctua U-14s. With a Noctua D15s, it was able to run it at 64c:
        [url<]https://www.eteknix.com/intel-core-i9-7980xe-extreme-edition-review/6/[/url<] With an AIO watercooler: ~56c [url<]https://www.kitguru.net/components/leo-waldock/intel-core-i9-7980xe-extreme-edition-18-cores-of-overclocked-cpu-madness/6/[/url<]

          • xeridea
          • 2 years ago

          Pretty sure that cooler used on 1950x doesn’t have full coverage, which makes big difference on TR. I get better thermal results on my 1700X than they do, using just a Hyper 212 EVO vs their watercooler, so perhaps they are reporting the offset temperature? They don’t say. The second review doesn’t even list cooler for TR.

          It is a widely known fact that Intel has been using low-budget TIM on CPUs for years, and it has horrendous results. Idle and stock temps are way higher, and OC is limited due to thermals. Intel even recommends you NOT overclock your K CPUs… which you specifically pay extra for for the ability to OC, and a lot of people, probably the majority, OC such CPUs. So Intel is saying they know their thermals suck, deal with it. Everyone is free to buy what they want, I just feel like throwing it out there since it wasn’t mentioned, which seems like an odd decision.

          • Klimax
          • 2 years ago

          Even U-14 is sufficient for stock? Good. One major problem solved. (As long as I can get one of them on air near 4.1 with AVX-512 I am good)

        • xeridea
        • 2 years ago

        [url<]https://www.youtube.com/watch?v=gz9HBVh57T8[/url<] By Gamers Nexus

          • chuckula
          • 2 years ago

          1.15Vcore at stock?

          [url=http://www.tomshardware.com/reviews/intel-core-i9-7960x-cpu-skylake-x,5238-2.html<]THG[/url<] overclocked all 16 cores of the 7960X to 4.3GHz at 1.1V. [quote<]We raised VCCIN up to 1.9V, though this does cause thermal output to increase. We also found that up to 2.0V is "generally" safe if you are using water cooling. However, we lowered the VCORE to a mere 1.1V to combat heat. That gave us a semi-stable 4.5 GHz overclock; we were able to run a wide range of heavily threaded workloads, though extended AIDA stress tests exposed throttling. We decided to stick with a 4.3 GHz all-core overclock (84-88°C) because we found that to be the safest setting that wouldn't trigger the aggressive throttling algorithms. The -7960X is very sensitive to increased voltages, and even bumping up to 4.4 GHz resulted in nagging throttling during stress tests.[/quote<]

    • jackbomb
    • 2 years ago

    Meh. Not much faster than my 2600K.

    Just kidding.

      • jihadjoe
      • 2 years ago

      Overclocked Q6600 still good enough…

    • xigmatekdk
    • 2 years ago

    Also why doesn’t the reviewer perform test with Turbo Boost 3.0 turned off vs on? Many people have reported stuttering with that beta crap on their lower core models. it makes sense why the newer chips have even more problems with that, hence the crazy bad frame times.

      • Jeff Kampman
      • 2 years ago

      TBM 3.0 is a headlining feature of these chips and it’ll be used by default whether one installs Intel’s TBM driver or not now (unless a motherboard vendor makes a bad default settings choice). If there’s a problem with the implementation of the feature, our tests need to show that.

    • Unknown-Error
    • 2 years ago

    So the best overall CPU for price and performance, whether gaming, non-gaming, streaming, professional work, etc., is the 7900X. Very well balanced CPU. The 1950X and 1920X deserve plenty of respect. But when it comes to the 7960X and 7980XE, all I can say is…….meh.

      • chuckula
      • 2 years ago

      Upthumbed because even if I disagreed with you (and I agree with most of what you said) your comments were based on information that was actually in TR’s review.

      • Khali
      • 2 years ago

      I came to the same conclusion. If I was going to pick one of these CPUs, the 7900X would be the one. It’s a combination of middle of the road and the best bang for the buck.

    • themisfit610
    • 2 years ago

    Benchmarking x265 with high core counts really requires that you do some custom tuning. I don’t know if this is implemented in handbrake.

    I’d contact multicoreware about this.

      • chuckula
      • 2 years ago

      Given how Handbrake behaves on other websites I’ll give TR credit for finding a transcoding operation and settings that actually seem to scale beyond 6 – 8 cores or so. You are right that “Handbrake” is not a generic benchmark by a long shot and the settings are very very touchy.

    • chuckula
    • 2 years ago

    Even though this will be downthumbed for being related to the actual content of the article instead of jumping to a preconceived conclusion, I’d like to thank TR for including the game streaming benchmarks in this article and I hope they will become a part of regular reviews going forward.

    These benchmarks aren’t the simplest things to implement and I’m glad you put in the extra work to do them. Some of us actually read your work.

      • cegras
      • 2 years ago

      You don’t get bonus credit for thanking the reviewer to cover for your absurd behaviour. Go ahead, downthumb me.

      Also, why do you suddenly care about streaming now?

      [url<]https://www.gamersnexus.net/guides/2993-amd-1700-vs-intel-7700k-for-game-streaming[/url<] Background apps affecting minimum fps: [url<]https://youtu.be/y1PjNtkFtHc?t=4m27s[/url<]

        • chuckula
        • 2 years ago

        I personally don’t care that much but lots of people had requested it and nobody had even mentioned that TR had added these tests to the review for the first time.

        As for some other website talking about streaming, I’m not sure how that proves that complimenting TR for doing a streaming test is a bad thing.

    • zorglub
    • 2 years ago

    Two considerations:
    – Intel is no longer only a CPU maker: it now has a large portfolio of optimized software libraries running on its CPUs: TBB, DAAL, MKL, CnC, IPP, Media SDK, Media Server Studio… Lots of software now includes those libraries to optimize application performance in multithreaded environments. It is still a mystery to me how those libraries will run on the AMD architecture. Will they be as efficient? Should we expect a performance cost? AMD has no such library portfolio, and for developers it would be the hassle of maintaining two branches in the code.

    – Strangely enough, despite a stunning 60 PCIe lanes being available, not many motherboard manufacturers are taking advantage of that to allow for more than two or three x16/x16/x8 PCIe configurations. Say you need four to six PCIe x8 slots for heavy data capture, processing, and archiving: nothing is really available on the workstation side. Got to feed those cores with data!
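The library worry in the first point comes down to how a library picks its code path. A purely hypothetical sketch (this is not how MKL or any real library is written; the function and field names are made up) of the difference between feature-based and vendor-based dispatch:

```python
# Hypothetical sketch: two ways a math library could pick a code path.
# Feature-based dispatch asks what the CPU can do; vendor-based dispatch
# asks who made it. Only the latter penalizes a non-Intel chip that
# supports the same instructions.

def pick_kernel_by_feature(cpu):
    """Vendor-neutral: use the fast kernel whenever the CPU supports it."""
    return "avx2_kernel" if "avx2" in cpu["features"] else "scalar_kernel"

def pick_kernel_by_vendor(cpu):
    """The worry zorglub raises: a vendor check sends capable non-Intel
    CPUs down the slow path regardless of their actual feature set."""
    return "avx2_kernel" if cpu["vendor"] == "GenuineIntel" else "scalar_kernel"

intel = {"vendor": "GenuineIntel", "features": {"avx2"}}
amd   = {"vendor": "AuthenticAMD", "features": {"avx2"}}

print(pick_kernel_by_feature(amd))  # avx2_kernel: capability check is vendor-neutral
print(pick_kernel_by_vendor(amd))   # scalar_kernel: same capability, slower path
```

Whether any given Intel library behaves like the second function on AMD hardware is exactly the open question being asked above.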

      • mistme
      • 2 years ago

      The CPU has 44 lanes; the ASRock X299 OC Formula has five x8-capable PCIe slots, and there’s a handful of boards with four x8-capable PCIe slots.

    • mczak
    • 2 years ago

    Small mistake in the table on page 1: the i9-7920X has a 140W TDP, not 165W.
    IMHO the i9-7980XE and i9-7960X are a waste of money, even considering that pretty much by definition these are supposed to be hideously expensive consumer chips.
    At least under stock conditions: the i9-7980XE might have 12.5% more cores than the i9-7960X, but it has the same TDP, meaning it has to lower clocks if all those cores actually get used. Hence, even in perfectly scaling multithreaded benchmarks, it is hardly ever more than 6% faster (and apparently in the not-quite-perfectly-scaling benchmarks the i9-7960X is often faster, even).
    A similar story should apply to the i9-7940X, which is why that chip is probably the “sweet spot” among these expensive chips: same TDP again, so barely any slower in perfectly scaling benchmarks, and potentially faster when scaling isn’t perfect.
    Plus the fact that those last four cores from the i9-7940X to the i9-7980XE are really expensive ($300 per two cores…).
    I have not seen a single review of the i9-7940X, and I’d bet that’s because Intel didn’t sample any, due to it making the two more expensive chips look quite silly.
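mczak’s back-of-envelope argument can be sketched numerically. This is a toy model, not measured silicon behavior: it assumes per-core power scales with frequency to some exponent (roughly 2-3 once voltage tracks frequency), so more cores in the same TDP envelope means lower clocks:

```python
# Toy model of mczak's argument: 12.5% more cores at the same TDP means
# clocks must drop, eating most of the ideal multithreaded gain.
# alpha is the assumed power-vs-frequency exponent (a guess, not a spec).

def tdp_limited_speedup(cores_big, cores_small, alpha=2.0):
    """Ideal multithreaded speedup of the bigger chip at equal package power."""
    ratio = cores_big / cores_small       # 18/16 = 1.125
    freq_scale = ratio ** (-1.0 / alpha)  # clocks drop to fit the same TDP
    return ratio * freq_scale             # throughput ~ cores x frequency

print(round(tdp_limited_speedup(18, 16, alpha=2.0), 3))  # ~1.061, i.e. ~6% faster
```

With a quadratic power model the 18-core chip ends up only ~6% ahead of the 16-core one, matching the estimate above; a cubic model gives ~8%, still far short of the 12.5% core-count advantage.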

    • Takeshi7
    • 2 years ago

    I’m really disappointed you didn’t include LGA 1151 in all of the benchmarks. Just for comparison.

    • jarder
    • 2 years ago

    39 comments so far and 41% of them are by “Chuckula”, I wish I had that much free time 🙂

      • Marios145
      • 2 years ago

      The dude has serious issues with AMD.
      I’ve also seen him attack AMD countless times on Phoronix.

      edit: quick google search
      “chuckula amd” 6520 results
      “chuckula intel” 3670 results
      “chuckula nvidia” 2050 results

      Is it love or hate?

      • Krogoth
      • 2 years ago

      I “almost” think he is a part-time Intel Marketing rep.

        • jarder
        • 2 years ago

        Well, he did come up with the Epyc uses “glued” cores line:
        [url<]https://techreport.com/discussion/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed?post=1024163#1024163[/url<] And that was just a couple of months before that became Intel's official line. That's proof enough for me 🙂

          • Klimax
          • 2 years ago

          And it’s not original. AMD came up with that long ago during the Core 2 era. (Or maybe during the Pentium D, aka 2x Netburst, era.)

            • K-L-Waster
            • 2 years ago

            What are you trying to do here, ruin a perfectly good witch hunt?

      • HERETIC
      • 2 years ago

      And this is just the “warmup” for 5th Oct

      • Khali
      • 2 years ago

      It’s a full-time job being the counter-shill to Krogoth’s AMD fanboy posts.

      In my opinion, both sides have issues. Intel has been coasting and overpricing CPUs due to no competition. AMD, on the other hand, can’t quite match Intel in performance and efficiency.

      Then we get the marketing-speak BS that turns out to be lies more often than not. Both companies do it, but past actions have proven AMD is more apt to do so.

      Shilling for one side or the other is nuts. They are both corporations that are only concerned with making money and keeping the investors/board of directors happy. They don’t care about you guys one way or the other.

    • chuckula
    • 2 years ago

    For an example of how important software optimization is to these newer chips, and for some context about all the work that goes on behind the scenes in the TR review, take a look at how the performance numbers change across Threadripper and Skylake-X [url=https://www.phoronix.com/scan.php?page=article&item=clear-linux-7980xe&num=1<]just by changing Linux distros that are tuned differently.[/url<] It should give you a new appreciation for all the subtle things happening in the background.

      • Krogoth
      • 2 years ago

      This has been true since the 8008.

    • Zizy
    • 2 years ago

    I wonder how the similarly priced 7551P would fare in the non-gaming tests. (2200 vs. 1900 EUR: close enough when you are paying that much.)

    Strange that you saw no gain in the end from the 7980XE (compared to the 7960X). The top chip has always offered some small advantage to push you toward it. Were the tests not threaded enough for this advantage to clearly show?

    • brucethemoose
    • 2 years ago

    [quote<] I have a hunch that we haven't seen the upper limit of Threadripper CPUs' core counts, either. [/quote<] Hmmmmmm. Threadripper and Epyc use different sockets (TR4 and SP3), but apparently they're physically identical. A 32C Threadripper would need a new 8-channel mobo at the very least... Do you know something we don't, Jeff?

      • chuckula
      • 2 years ago

      Threadripper 2 in October CONFIRMED!

      • ptsant
      • 2 years ago

      As you say, they are physically identical. I suppose the part most likely to pose a problem is the PCIe lanes (128 on Epyc). Otherwise, the number of channels is the same, and a 24- or 32-core Threadripper should be theoretically possible.

      I am not sure it would make sense, though. I mean, TR remains competitive at its price point. Intel can keep the ultra-high-end with its $2k chip…

      • Zizy
      • 2 years ago

      Yes, the pins are in the same positions, but not with the same functions, I believe.
      It would definitely be possible to make a 32C TR with 8-channel mobos, and perhaps the full 128 PCIe lanes as well… but this is Epyc already; no need for another product like that.
      It might be possible to make a 32C TR where each die has a single memory channel attached to it, and 16 PCIe lanes. It would be quite hobbled, though.
      Easier and more likely would be to have two “functional” dies intact, working as now, with two dies without any I/O capability, serving just to calculate stuff offloaded to them. That’s the easiest modification, and the one with the fewest drawbacks and most gains possible. It would be a pretty cumbersome solution, requiring you to reboot when you want to switch from a 32C workstation to 16C/8C gaming, but it would still be a single chip useful for everything from gaming to workstation duties. I guess it would cost about $2k, though, so you might just want to grab a 7551P and be done with it.

      • DragonDaddyBear
      • 2 years ago

      They could just use one memory channel from each die.

        • Waco
        • 2 years ago

        If they don’t change *any* wiring, I’d bet it would be 2 channels from 2 dies, and no channels from the others.

    • Thresher
    • 2 years ago

    I still think it would be useful to run these against the regular consumer line of processors. How are we to know if going to HEDT has any real value for what we’re doing with our mainstream processors?

      • derFunkenstein
      • 2 years ago

      There was a Ryzen 7 1800X this time out, but it would have been nice to see an i7-7700K, too.

        • JustAnEngineer
        • 2 years ago

        I’ll bet you a Coca-Cola that we will see comparisons to a Core i7-8700K next month.

          • JustAnEngineer
          • 2 years ago

          It would have been entertaining to see results for the Core i7-7740X that fits onto the same motherboard as the Core i9-7980XE and that TR recommended in the latest system guide.

          • derFunkenstein
          • 2 years ago

          That’s a sucker’s bet. Of course we do. Lol

    • thx1138r
    • 2 years ago

    I know that gaming will not be the usual workload for these chips, but I find it interesting that Threadripper seems to have a distinct edge in games. Could this be down to latencies from the Mesh interconnect approach? Maybe the infinity fabric thing wasn’t so bad after all…

      • jarder
      • 2 years ago

      There’s some more latency weirdness in the Javascript benchmarks, but it’s not a big deal when you consider these chips have prioritized multi-threaded workloads.

      • chuckula
      • 2 years ago

      You clearly see that some tuning of these chips is required to get games performing optimally.

      The 7900X vs. 7820X is the most interesting set of results, where the 7900X is clearly substantially stronger than the 7820X (and the 1950X) in the 99th-percentile numbers, by a much larger margin than you would expect simply from adding two extra cores to a chip that already has 8 cores. It’s probably mostly due to how the firmware configures the L3 mesh to move data between cores. There’s a tradeoff between power efficiency in the L3 at the cost of increased latency vs. running it faster to reduce latency while burning more power.

        • thx1138r
        • 2 years ago

        Good point, I’d glossed over the 7820X numbers in the review, but looking again, it seems to do particularly poorly when it comes to games, especially when you look at the 99th percentile times and compare those with 7900X.

    • Krogoth
    • 2 years ago

    Reminds me of the Socket 478 P4 Extreme Editions back in the day, jokingly called “Emergency Editions” by some.

    Intel throws in server-tier rejects and converts them into the “fastest desktop chip” to retain the “halo effect”.

    Otherwise, they are somewhat poor value at their price points for most HEDT workloads while [b<]lacking ECC support[/b<]. In terms of price, they aren't far behind their Xeon W counterparts. If you need to get 16 cores from Intel, do yourself a favor and just go the extra mile and get a real Xeon W.

      • chuckula
      • 2 years ago

      Try reading the review before you post.

      But if you want to talk about “emergencies,” care to explain how these chips are complete failures when the margin of victory of the 16-core 7960X over the 16-core 1950X is [b<]greater[/b<] than the margin of victory of the 1950X over Intel's previous "failed" 10-core 6950X? With all these failures, it's amazing how Intel manages to turn a profit every quarter.

        • Krogoth
        • 2 years ago

        You are reading too much into it. I didn’t say these chips were failures. They are “emergencies” in the sense that Intel’s marketing needs to retain the “halo effect” in the HEDT market and didn’t feel confident enough that the 7920X could do it, even though it already does.

        • IGTrading
        • 2 years ago

        We know that both 7960X and 7980XE can offer more performance than the 1950X and, according to all the other reviews online (arstechnica, anandtech, tomshardware, hothardware, tecspot, semiaccurate, etc.) this was to be expected.

        What we find surprising are the actual numbers and percentages, and we suspect something is wrong somewhere.

        Either on the testing side or on the final-calculations side.

        The power consumption was tested, proven and certified to be way higher on the 7980XE than on the Threadripper 1950X (complete system power) by all the websites.

        The 7960X sits perfectly within its TDP, but the 7980XE is known to go way over its stated TDP, depending on the task running, which should not happen (proven in detail at AnandTech).

        The performance numbers are simply unbelievable and they contradict all other reviews on the internet.

        These are the lowest performance numbers on the Threadripper 1950X that we’ve ever seen, despite the memory running at a high (and beneficial-for-Threadripper) speed of 3600 MHz.

        Nobody else, absolutely nobody, got an average performance advantage of 24% in favor of the 7960X over the 1950X. The average was always under 5%, with some particular exceptions like 14% when AVX-512 is used.

        Nobody else got a 24% performance advantage with less power consumption in favor of the 7980XE, not even mentioning the 7960X, which is a bit slower anyway.

        And here we have the 7960X being 24% faster than the Threadripper 1950X?!

        It is extremely hard to believe.

        I personally have been reading TR for more than 15 years now, and I have never suspected TR of any bias, nor do I suspect anything similar now, but there is something wrong with the numbers.

        For example, in Cinebench, we see that the scores for Intel’s chips are HIGHER than in other reviews, and that is to be expected because of the overclocked RAM.

        But AMD’s numbers are considerably LOWER than in other reviews, despite having the same overclocked memory.

        The only difference we can see is the storage system, which was different on the AMD test bed, but that shouldn’t influence the results so much.

        It would be great if TR could have another look at everything.

        These are the worst results for AMD’s Threadripper 1950X out of all the 7980XE reviews on the internet 🙂 It is hard to believe TR just ended up with a particularly bad sample, but the 1920X as well?!

          • chuckula
          • 2 years ago

          [quote<] The power consumption was tested, proven and certified to be way higher on the 7980XE than on the Threadripper 1950X (complete system power) by all the websites.[/quote<]

          All the websites? Fascinating. Although I don't really care about power [b<]consumption[/b<]; I care about power [b<]efficiency[/b<], and that's what TR has [b<]always[/b<] measured in these tests.

          But if you don't believe me, AnandTech says that Skylake-X has better power efficiency: [url<]https://www.anandtech.com/show/11839/intel-core-i9-7980xe-and-core-i9-7960x-review/14[/url<]

          So does Phoronix, running tests that ought to be perfect for Threadripper: [url<]https://www.phoronix.com/scan.php?page=article&item=intel-7980xe-extreme&num=1[/url<]

          Oh, and by the way, forget about efficiency: HotHardware also tested both the 7960X and 7980XE at lower absolute power levels under load: [url<]https://hothardware.com/reviews/intel-core-i9-7980xe-and-core-i9-7960x-review-and-benchmarks?page=7[/url<]

            • IGTrading
            • 2 years ago

            AnandTech has shown that the 7980XE reached a max power consumption of 190W, which is way past the claimed 165W TDP, which is unacceptable.

            All of Skylake-X is a mess when it comes to TDP.

            Here are the quotes :

            “The Core i9-7900X is rated at 140W TDP, however we measured 149W, a 6.4% difference. ”

            Unacceptable, in our opinion.

            “However the Core i9-7980XE goes above and beyond (and not necessarily in a good way). At full load, running an all-core frequency of 3.4 GHz, we recorded a total package power consumption of 190.36W. ”

            This is unacceptable and horrible at the same time. 16% over the TDP?

            This is Intel basically lying when selling the chips and claiming a 165W TDP for them.

            Also, we would like to mention that the 7960X has a perfect showing, with good performance, staying within its TDP limits.

            If it weren’t for X299’s terrible unreliability, hellish hot VRMs (over 100 degrees Celsius), questionable NVMe compatibility/availability and costs, lower I/O capabilities, humongous price, and lower memory capability, the 7960X would be a perfect slap in the face of AMD at 1200 USD.

            But the 7960X is 700 USD more expensive and has a weak, rushed, unreliable, overheating, feature-limited, expensive platform behind it, and this makes it a total fail.

            In itself, the processor is quite capable and a good overall design.

            None of the other reviewers tested the limits of the TDP.

            They’ve only tested the power consumption in one or a few tasks that did not push the platform to its limits.

            We’re not saying that AMD’s Threadripper is a whole lot more efficient. It does well and that’s it, but AMD respects its stated TDPs.

            So it would be a good idea to properly read the articles you link to.
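For what it’s worth, the percentages being argued about here are easy to check. A quick sketch using the package-power figures quoted from AnandTech above (the rated TDPs and measured wattages come from those quotes; the helper name is just for illustration):

```python
# Check of the over-TDP percentages quoted above, using AnandTech's
# package-power measurements (149 W vs. a 140 W TDP for the i9-7900X,
# 190.36 W vs. a 165 W TDP for the i9-7980XE).

def over_tdp_pct(measured_w, rated_w):
    """Percent by which measured package power exceeds the rated TDP."""
    return (measured_w / rated_w - 1) * 100

print(round(over_tdp_pct(149.0, 140.0), 1))   # i9-7900X: ~6.4% over TDP
print(round(over_tdp_pct(190.36, 165.0), 1))  # i9-7980XE: ~15.4% over TDP
```

So the 7980XE figure works out to about 15.4%, which the thread rounds up to 16%.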

            • jarder
            • 2 years ago

            Newsflash: when have any CPU makers ever stuck to such a rigid definition of TDP? At least read the Wikipedia article on TDP, which states:

            [quote<]The TDP is typically not the largest amount of heat the CPU could ever generate[/quote<] and [quote<]Some sources state that the peak power for a microprocessor is usually 1.5 times the TDP rating[/quote<]

            • IGTrading
            • 2 years ago

            Yes, we know. 🙂 But system designers and manufacturers don’t use Wikipedia as the technical reference for a design point; they use the materials Intel provides.

            AMD seems to be sitting very well within its TDPs on all Zen processors, which is a very good sign.

            After years of improving efficiency, we were surprised by Intel’s lack of reliability when it came to X299 and Skylake X.

            [url<]https://www.intel.com/content/dam/doc/white-paper/resources-xeon-measuring-processor-power-paper.pdf[/url<]

            TDP = Total Design Power by definition. This is used to design the motherboard and the cooling system, to give designers a clear limit above which the system doesn't go unless it is purposely overclocked.

            Wikipedia: "The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate under any workload."

            Intel: "TDP (Thermal Design Power) Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE."

            Intel: "Due to normal manufacturing variations, the exact thermal characteristics of each individual processor are unique. Within the specified parameters of the part, some processors may operate at a slightly higher or lower voltage, some may dissipate slightly higher or lower power and some may draw slightly higher or lower current. As such, no two parts have identical power and thermal characteristics. However the TDP specifications represent a "will not exceed" value."

            This is what I've understood by TDP in my past 21 years in the IT hardware industry. When it comes to warranty, legal obligations, engineering, and design, the technical specs are very important. If Intel wants to add AMD's ACP next to the classic TDP, it should clearly state so.

            • jarder
            • 2 years ago

            Now I see the problem: you’re working off your own personal definition of TDP, one that is not shared by anybody.

            [quote<]TDP = Total Design Power by definition.[/quote<] That's the first time I've ever seen it defined that way. Most people use the "Thermal Design Power" or possibly "Typical Dissipation Power" definition. Seriously, google it and update your personal dictionary.

            • IGTrading
            • 2 years ago

            Mate, I’ve been in this industry for 21 years now, and the only things that can change are the marketing acronyms, not the technical specs.

            For technical specs, a company can be sued.

            I’ve added Intel’s own documentation, since you brag about using Google but don’t seem to get the results.

            Link: [url<]https://www.intel.com/content/dam/doc/white-paper/resources-xeon-measuring-processor-power-paper.pdf[/url<]

            So it is not "my" definition; it is Intel's own definition, which the company doesn't really care to respect. #readmore

            • chuckula
            • 2 years ago

            [quote<]Mate, I've been in this industry for 21 years now and the only thing that can change are the marketing acronyms, not the technical specs.[/quote<]

            Says the guy who registered yesterday with copy-n-pasted walls of text and YouTube links but no facts.

            Incidentally, from your use of the word "Mate," you wouldn't happen to be the same loser who made that YouTube anti-Intel butthurt propaganda video and has been spamming tech sites with links to it for your own monetized gain, are you?

            • jarder
            • 2 years ago

            From page 2 of the doc you linked to:
            [quote<]TDP is not the maximum power that the processor can dissipate[/quote<] QED.

            • IGTrading
            • 2 years ago

            The only thing demonstrated by the above post is that it is always useful to read more than 8 words of a technical document that is 8 pages long.

            After having your previous post nullified by Intel’s own technical documentation, you take a single sentence out of context and demonstrate that you really DID read just 8 words 🙂

            “TDP is a worst case value when running “worst case” application, most processors, when running a more “typical” workload, will dissipate power that is less than the rated TDP value; ”

            Therefore, the power consumption of the 7980XE is 16% worse than Intel’s “worst case” scenario.

            Moreover: “It is possible to write “virus-like” code that toggles transistors in the processor on and off, but doesn’t do any real work.”

            I assure you that AnandTech did not write a single “virus-like” line of code to cause the 7980XE to grossly surpass its claimed TDP by 16%.

            Also : “The TDP specifications represent a “will not exceed” value”

            This is a technical specification Intel imposes on itself but is unable to deliver with the 7980XE.

            #readmore

            QED 😉

            Personal opinion: IMHO, Intel’s 7960X seems to be perfectly able to respect its stated specifications, so it is normal to expect the same from a CPU that’s 300 USD more expensive.

            Also, Coffee Lake was just thoroughly tested in Romania yesterday, and it seems to be way more efficient than Skylake-X. It actually shows less power consumption than AMD’s Ryzen while matching or exceeding it in performance at a lower price.

            So Intel can offer high-quality, competitive, and reliable hardware, but X299 and its processors are not such a solution.

            QED 😉

            • jarder
            • 2 years ago

            So many words! Did you admit that your definition of TDP was wrong yet?

            • IGTrading
            • 2 years ago

            It is Intel’s own definition, from the technical documentation I’ve just posted that you refuse to read. 🙂

            It is the total power dissipation that the cooling system is designed to handle.

            Intel says “design a 165W cooler and you’ll be within specs,” and then you power up 3 or 4 VMs for everybody in an architecture studio to use for rendering and heavy tasks.

            When everybody gives their own VM some heavy work, the cooling system will prove 16% undersized, leading to throttling and overall high temps, which eat away at the system’s overall reliability.

            More reading: [url<]http://encyclopedia.thefreedictionary.com/Thermal+Design+Power[/url<]

            P.S. Claiming that AnandTech went and did something special to make its 7980XE reach a power consumption over 16% higher than the stated TDP is nonsense. If Intel is not able to keep a 2000 USD processor within its own specifications, then the company has a serious reliability problem with its X299 solutions.

            Add the fact that the thermal resistance of high-quality TIM rises by about 25% over the course of 1000 to 2000 cycles, and you'll see how that "fake" TDP that Intel claims will be a real issue. #readmore

            • kuraegomon
            • 2 years ago

            The takeaway from this is actually pretty simple. In order to hit the 7980XE’s desired clock rates – and achieve the requisite separation from Threadripper – Intel had to get loose with their power budget. We’ve all seen many instances of this before – most recently from AMD’s own RTG with Vega 🙂

            I don’t think we need to sally forth with pitchforks and torches to punish them for their iniquity. It’s just appropriate to note that you’ll need beefier cooling than the stated TDP implies, and move on.

            • kuraegomon
            • 2 years ago

            After reviewing the appropriate sections of said document, I believe IGTrading is correct here. His ranty delivery and the subsequent down-thumbing are obscuring that, but his (actually Intel’s) definition of TDP is as described in the doc:

            [quote<]TDP. Thermal Design Power. The thermal design power is the maximum power a processor can draw for a thermally significant period while running commercially useful software. The constraining conditions for TDP are specified in the notes in the thermal and power tables.[/quote<]

            AnandTech, TR, and every other reputable review site are certainly running [i<]"commercially useful software"[/i<] (or a reasonable facsimile thereof) in their tests. If a CPU exceeds its TDP for a [i<]"thermally significant period"[/i<] of time under those conditions, then, by [b<][i<]Intel's definition[/i<][/b<], that CPU is failing to respect its TDP.

          • Jeff Kampman
          • 2 years ago

          I’ve said this before and I’ll say it again: other sites are not running the same tests that we run. You cannot cross-compare performance indices derived from different data sets gathered from different test rigs.

          We may show a larger lead for the i9-7960X in our final reckoning than most sites because of our digital audio workstation testing. Intel CPUs have a large and repeatable advantage in DAWBench VI, and since we are one of exactly two sites that runs DAWBench, our results will appear more favorable to Intel CPUs than most everybody else because of that data.

          For what tests I can cross-compare with other sites, our results are in line with what others are reporting. If anything, we seem to put the Threadrippers in a better light than average because we threw DDR4-3600 memory in our AMD and Intel rigs alike. A lot of sites are using older versions of common software that may not be optimized for the latest-and-greatest chips, and that may also explain some of the divergence.

          If there are actual outliers (i.e., greater than plus or minus 5%) that you can link to—[i<]all else being equal[/i<]—that I should be aware of, I'm all ears. I'm just not seeing anything in other sites' results that justifies the grave nature of the accusations you're leveling.

            • IGTrading
            • 2 years ago

            Thank you, Jeff, you’re right on the audio-processing tests. For me personally, it was surprising to see that from an aggregate difference of 5 to 15% in everybody else’s testing, you got a difference of 24% (and that on the 7960X, not the flagship).

            It is a surprise to see that the audio testing scores would push Intel’s average advantage up by around 60% (from a 15% Intel average advantage to your result of 24%). 🙂

            I wonder what compiler those apps use. (When it comes to Intel, we always expect at least one dirty trick somewhere 🙂 – see: [url<]https://www.youtube.com/watch?v=osSMJRyxG0k[/url<])

            That's exactly what we've said: the high RAM frequency really does help AMD's architecture. So in a scenario where AMD should have at least "some" sort of advantage, or at least an almost "perfect" scenario for AMD, the Intel advantage increases instead of decreasing.

            On a side note, I find it very nice and tidy how TechSpot uses the per-task value graphs: [url<]https://www.techspot.com/review/1493-intel-core-i9-7980xe-and-7960x/page5.html[/url<] So an Adobe guy would clearly know what platform to go for for an Adobe type of workload, a Blender guy would know what to go for, a Maxon guy as well, and so on…

            After asking my colleagues, we happen not to have any workstation client with an audio-specific workload (my boss just got sparkling eyes now 🙂). So for all our clients, a value (or energy-efficiency) graph that is strongly affected by audio testing scores is not really useful. Would you give some consideration to adding some per-workload-type value or power graphs?

            I'll have another look over it during the weekend; for now we have a full day of work ahead 🙂

      • chuckula
      • 2 years ago

      [quote<]Intel throws in server-tier rejects and converts them into becoming the "fastest desktop chip" to retain the "halo effect". [/quote<]

      Yes, and according to AMD, literally every Threadripper is made from, at worst, a 95th-percentile Zeppelin die that's cherry-picked. Meaning even if TR got literally the worst Threadripper sample chip that ever left the factory, it's in the upper 5% of all AMD chips.

      So, as we can see from the review, Intel's "failed" "reject" dies are faster, more power-efficient, and vastly more capable of real overclocking than literally cherry-picked AMD dies. So how screwed is AMD in the server market, where the non-failed Intel parts are being sold against AMD parts that didn't make the grade?

      The funny part is that you've copy-n-pasted that asinine statement at least a hundred times without even thinking about the ramifications.

        • ImSpartacus
        • 2 years ago

        Do you have a source for the Threadripper binning stat?

        I wouldn’t be surprised if that’s true, but I hadn’t previously heard it.

          • chuckula
          • 2 years ago

          AMD’s official launch slides: [url<]https://www.overclock3d.net/news/cpu_mainboard/amd_reveals_that_threadripper_is_built_using_their_best_ryzen_cpu_dies/1[/url<]

          I'm just assuming their press deck was being honest.

        • Krogoth
        • 2 years ago

        The Intel-defense-force tirade is tiresome, as is the “U COPY + PASTE!” rebuttal.

        BTW, you might want to check the power-efficiency angle more carefully. It isn’t that cut and dry.

          • chuckula
          • 2 years ago

          [quote<]BTW, You might want to check power efficiency angle more carefully. It isn't that cut and dry. [/quote<] Yeah, I read the article here and every other article where even the 7980XE has better power [b<]efficiency[/b<]: Anandtech and Phoronix and TR [edit: PC Perspective too, and that's even testing in Holy Cinebench with the 7980XE], and no contradictory results from anybody else. I have sources; you have fact-free, prejudiced conclusions.

          I'm sure you'd be totally open to an "opinion" that the 7900X beats the 1950X in Cinebench because you aren't looking at the scores carefully enough. Oh, are you still under the impression that measuring the power consumption of an overclocked chip running Prime95 while ignoring the actual performance numbers is the same thing as [b<]efficiency[/b<]?

            • Krogoth
            • 2 years ago

            It depends entirely on the workload in question and whether it can effectively harness those extra threads.

            For the HEDT market, the 7900X is the clear winner in power efficiency outside of certain niches (hilariously parallel stuff) where the 1950X, 7920X, 7960X and even 7980XE pull ahead.

            The 7980XE would be an excellent server chip if it weren't for the lack of ECC support.

      • ImSpartacus
      • 2 years ago

      Idk why you’re getting downvoted. That’s exactly what Intel did.

      I don’t know how anyone could possibly justify otherwise. It’s crystal clear that Intel originally planned an 8-12C Skylake-X, but 16C Threadripper meant Intel had to beat that core count. So they added the 14-18C HCC parts later so they could retain their crown.

      It’s really not complicated, people.

        • chuckula
        • 2 years ago

        It was also “crystal clear” that AMD never intended to launch Threadripper: [url<]http://www.pcgamer.com/amd-engineers-built-ryzen-threadripper-in-their-spare-time/[/url<] So you might say that Threadripper was just an "emergency edition" response to the $600 i9-7820X, for that matter. Furthermore, given the binning that Threadripper receives, it makes you wonder why all those high-performing chips are being thrown at the desktop instead of into high-priced server parts.

        However, you didn't see me hurling insults at Threadripper when it launched over a made-up conspiracy theory. That's why I'm unimpressed with Krogoth's copy-n-paste hit job that clearly shows he didn't bother to read TR's review.

          • bhtooefr
          • 2 years ago

          Could be that the desktop users prefer high-leakage high-clocked parts, whereas server customers want low-leakage efficient parts?

          Basically, the bins of “best dice for Epyc” and “best dice for Threadripper” could actually be two separate bins.

          • Zizy
          • 2 years ago

          Because no AMD server part has high clocks. There isn't even a 180W high-clocked 8C Epyc. Why waste dies that can reach 4GHz on parts where all cores will run at <= 3GHz?
          Nah, they just need to grab something that has good enough perf/W, a completely different bin.

            • chuckula
            • 2 years ago

            That raises the question: Why not? If there are all these dies that clock so well and are so power efficient, why not make a higher clocked (not necessarily 4GHz but maybe 3.2GHz base, 3.6GHz boost) 16-core Epyc Server?

            They already have 180W+ Epyc parts, so it's not unprecedented to do that. There's clearly a market for Epyc server parts that use 16 cores with lower clocks (AMD sells plenty of them), so why not have a 16-core part with clocks that break 3GHz?

            • derFunkenstein
            • 2 years ago

            Why are you asking questions to which you already know the answer? He said there were no 8C 180W Epyc parts, and you already know there is no voltage at which a Zen die can hit very high clocks. You can give it 180W. Hell, you can give it 280W. But it’ll never matter.

            • chuckula
            • 2 years ago

            I’m not asking them to hit 4GHz+ on a 16-core part.* I’m asking them to make a 16-core Epyc part with a base clock that’s just a little over 3GHz. That’s all. It’s got four dice instead of the two in Threadripper, but am I really asking AMD to violate the laws of physics by sampling four Zeppelin dies that have 4 cores turned off and running at 3.2GHz? Or even 3GHz? That’s all I’m saying here, and given how “miraculous” Threadripper supposedly is, I’m a little confused as to how I’m somehow making unreasonable demands on AMD.

            * After all, only “reject” “failed” Intel parts can do that and AMD is too successful to do that!

            • derFunkenstein
            • 2 years ago

            One day I’ll learn that you ask only rhetorical questions.

            • chuckula
            • 2 years ago

            A rhetorical question is one that doesn’t have an answer.
            I asked a manufacturing question based on simple physics that nobody here wants to answer for some reason since apparently AMD’s insanely successful Zeppelin parts should only be put into consumer platforms and never see the light of day in high-margin servers.

            • derFunkenstein
            • 2 years ago

            There is no answer here. Nobody here knows why AMD did or didn’t do something, or why their product selection is what it is. Your questions have no answers.

            • chuckula
            • 2 years ago

            [quote<]Nobody here knows why AMD did or didn't do something, or why their product selection is what it is. [/quote<] And yet you seem eager to agree that Krogoth's copy-n-paste is the be-all end-all answer to why Intel has launched the Skylake-X line.

            • derFunkenstein
            • 2 years ago

            No, I don’t. Show me where I said that. I’ll wait either for that or for an apology.

            Edit: my comments have been about:

            AMD’s product line
            Your rhetorical questions
            Doing valuable work for free
            Equality in the workplace

            So show me, chuckula, or are you unable to show me?

            • chuckula
            • 2 years ago

            OK, I’ll admit I was wrong about you agreeing with Krogoth if you agree that Krogoth is full of crap for copy-n-pasting troll comments just for easy upvotes by the usual suspects who clearly didn’t bother to actually read TR’s review.

            • derFunkenstein
            • 2 years ago

            c’mon man, how long have you been here? You know that’s my default opinion of Krogoth. I’m surprised you fucked this interaction up so badly, and continue to do so.

            • Mr Bill
            • 2 years ago

            Well except for [url=https://techreport.com/discussion/32319/amd-ryzen-threadripper-1950x-threadripper-1920x-and-threadripper-1900x-cpus-revealed#metal<]Threadripper[/url<]... [quote<]AMD's in-house team of extreme overclockers tried their hand at putting a Threadripper 1950X under liquid nitrogen last night, and the net result was a roughly 5.2 GHz all-core overclock. With 16 cores and 32 threads churning away at those speeds, the chip produced a Cinebench all-core score of 4188. [/quote<] Which implies some future headroom for Zen die with process improvements and shrinks.

            • Zizy
            • 2 years ago

            Before even going to the higher clocked 16C, I would just add another high clocked option to that 8C 120W part. Seems it should be easy enough to make another 3-3.5GHz 180W bin here. I imagine there would be at least some people interested in such a chip and it feels strange not to offer this option – for lightly threaded tasks needing tons of RAM and/or GPUs.

            But even boost clocks are limited to 3.1GHz (and 2.9 for all cores), so I would guess there is some inherent limitation of the Epyc platform. Perhaps interposer cannot stand high clocks for long without burning, so high-clocked Epyc parts are simply impossible, no matter the TDP.
            (TR has just 2 dies = fewer links; so it doesn’t hit these issues)

            • Chrispy_
            • 2 years ago

            I can’t speak for all datacenter managers but for me at least, a server room is constrained primarily by physical space and power usage/efficiency.

            Thanks in part to virtualisation and in part to the steady march towards code that scales better across multiple threads, there really isn’t any appeal to parts that operate outside the GHz-per-watt “sweet spot”. Assuming I have free power/thermal headroom in an individual server, or more likely a whole rack, I’m getting better efficiency by throwing more cores at it than by increasing the clock speed.

            There are two other factors that I can think of, off the top of my head:

            1) Software licensing often costs far more than the processors it runs on, and it’s priced per socket or per server. It doesn’t matter how fast those damn cores are if your increased server count doubles or quadruples your software licensing costs.

            2) If you decrease core count per server, you typically end up decreasing the cache and bandwidth too. Some applications don’t care about cache/bandwidth, but when you find one that does, your fewer-cored, ultra-fast server is going to tank.

            Honestly, anything that clocks outside the 2.0-2.4GHz performance-per-watt sweet spot is better suited to the workstation and enthusiast HPC markets. Those leakier chips can still command several hundred dollars, and there’s a reasonably healthy market for them, unlike at Dell/HP/IBM, who won’t touch the stuff that’s inefficient.

          • derFunkenstein
          • 2 years ago

          I dunno, I’m sure a lot of skunkworks projects turn into real things. Just because the “official” channels don’t see the need for something doesn’t mean employees want to deal with that situation.

          I spent my summer on one that got greenlit a couple of weeks ago. It started out as “how does this stuff work for my own edification” and turned into “hey this is a viable replacement for this legacy thing that totally sucks but we were stuck with it”

            • chuckula
            • 2 years ago

            Yes, and that explanation could just as easily be applied to the HCC silicon being put into LGA-2066. It was a side-project that got the greenlight.

            My overall point is that Krogoth has a disingenuous narrative that comes to a conclusion independently of actual facts. I never ran around calling Threadripper AMD’s “emergency panicked response” to the 7900X.

            That’s even more appropriate given that the $1000 10 core 7900X that launched first basically ties the $1000 1950X in TR’s aggregate non-gaming performance numbers and is clearly ahead in the 99th percentile gaming benchmarks [meaning the 7900X is a better price/performance value than Threadripper] now that its firmware has had a chance to mature. Oh, and TR’s benchmarks completely ignore AVX-512, so it’s not some “cheating” on Intel’s part to unfairly use features that AMD hasn’t invented yet.

            • derFunkenstein
            • 2 years ago

            Was it? I haven’t read that, but if it is then great. Everybody gets to do work for their employers for free. It’s equality in the workplace, and isn’t that what’s important?

            • thx1138r
            • 2 years ago

            [quote<]meaning the 7900X is a better price/performance value than Threadripper[/quote<] Can we please move past these simplistic arguments? It's just not that simple these days. On average, the 1950X is slightly ahead of the 7900X in non-gaming, and the 7900X is a bit ahead of the 1950X in gaming. Now, if you drill into the actual results, some applications favor one chip and other applications favor another. So "the best chip" for one person is not necessarily "the best chip" for another person; it comes down to the workloads they intend to run.

            [quote<]Oh, and TR's benchmarks completely ignore AVX-512,[/quote<] The benchmarks also ignore the number of PCIe lanes the chips have. Again, horses for courses...

      • Klimax
      • 2 years ago

      Actually, might not be the case here. If this post can be trusted:
      [url<]https://www.techpowerup.com/237306/intel-announces-availability-of-core-i9-7980xe-and-core-i9-7960x[/url<]

        • Krogoth
        • 2 years ago

        Skylake-X HCC != current i9 lineup

        Intel never intended to release consumer-grade 14- and 16-core SKUs until the Threadripper announcement. Let’s be honest, none of the Ryzens can match the 7900X and 7920X, which could have taken the 7960X and 7980XE spots as halo products.

          • smilingcrow
          • 2 years ago

          Why do you care so much?
          It is what it is regardless of why and how it came into being.
          The bottom line for me is that it’s another option and choice is usually good.
          I have no interest in any of these platforms but it’s good to know they are out there if I ever decide to immerse myself in 4K video work.

            • Krogoth
            • 2 years ago

            Those who still wear “blue-tinted” shades don’t realize that Intel has been coasting since Sandy Bridge. AMD brought sorely needed competition to the market. The current and upcoming SKUs are the result.

            • smilingcrow
            • 2 years ago

            It’s pretty clear to enthusiasts that Intel let the mainstream desktop market stagnate for a very long time.
            Shouting and ranting about that whenever one gets the chance just makes someone look a bit disturbed or simple minded.
            I see more reason to rant at AMD for having had nothing to compete with for over 10 years, which is what killed the market in terms of competitive pressure, but that’s just as futile.
            There have still been great CPUs out there for those that really need the performance in the workstation/server markets.
            Mainstream desktop is just for playing anyway: games, home video and the like, no big deal.
            If you need hardware for work then spend your money as the hardware is there.
            Ranting because you can’t buy cheap toys to play with is infantile.
            Grow up.

            • Krogoth
            • 2 years ago

            What the heck are you raving on about? Did I strike a nerve?

            • smilingcrow
            • 2 years ago

            Pot, kettle, black.

            • Krogoth
            • 2 years ago

            Okay, buddy. Keep using idioms that aren’t applicable, plus ad hominems.

            • trieste1s
            • 2 years ago

            Ranting at AMD for not doing the job is like saying we’re all at fault for not creating a new company to rival Intel.

            We should be completely amazed that AMD actually produced a CPU that forced Intel’s hand while having a tiny fraction of every resource Intel has, and a marketing team that somehow can never satisfy some people no matter how hard it tries. Intel itself successfully employed predatory tactics until it got smacked by an antitrust lawsuit. Who knows how long the long-term consequences last if predatory tacticians have their way? If you passively stand aside while that is happening, I don’t think you have the right to accuse the underdog prey of being helpless. In the realm of sociology we call that victim blaming.

            Ironically it seems that some people would praise AMD more if it had died outright, than if it made a comeback like it is doing so right now.

      • Mr Bill
      • 2 years ago

      AMD needs to release a 64 core version for “gaming”/HEDT just to put in the final nail, and possibly generate another Emergency Edition from Intel.

    • ptsant
    • 2 years ago

    Great review! I fully agree with your conclusions. The chips are impressively fast but prohibitively expensive and power hungry.

    I am wondering how much the equivalent Xeons cost (equivalent in core count and approximate MHz) and, conversely, what kind of Xeon you can get with the same money ($2000).

    Would it be feasible to add this information to the graph without noting performance? For example as vertical arrows at the appropriate price points? I suspect many people are wondering about this if they are in the market for $$$$$ workstations.

    — EDIT

    I found an estimate for the price of the highest-end Xeon-W 2195 with 14/28 cores at $2500 (Passmark). Another site estimates $2400 (some random guy). Although $500 is not small change, it might be worth going the Xeon way if you are spending something like $5K on a workstation.

      • K-L-Waster
      • 2 years ago

      Just keep in mind that Xeon motherboards with ECC support are probably also more expensive than X299 boards, so the overall system cost difference would be even wider.

        • ptsant
        • 2 years ago

        I’m not sure this is always the case. RGB lighting is apparently quite expensive.

        Some pretty nice Xeon motherboards are at ~$400-500 where I live. Some of the more exotic HEDT motherboards (ROG Rampage) hit $600, just like the top Threadripper motherboard (ROG Zenith Extreme).

        Anyway, my point is that there is some potential overlap between nice HEDT builds and Xeon builds, although the latter can quickly become extremely expensive when 64GB DIMMs and exotic peripherals (10Gbps Ethernet, SAS/FC, etc.) are involved.

        • Krogoth
        • 2 years ago

        Not by that much though considering the budget we are dealing with here.

          • ptsant
          • 2 years ago

          Yeah, exactly my point.

      • tay
      • 2 years ago

      I agree with Chuck’s conclusions. These chips are matching AMD on price/performance for the most part. Or am I reading something wrong?

        • chuckula
        • 2 years ago

        If you want to say that the 7960X and (especially) the 7980XE aren’t winning any price/performance metrics, then I’m not going to argue against it, but that’s always been par for the course in HEDT. It’s why far more people are interested in RyZen vs. Coffee Lake than Threadripper vs. Skylake-X.

        It’s calling these chips “failures” when they certainly bring the performance (even if they cost too much) that I don’t appreciate.

    • chuckula
    • 2 years ago

    Quick Blender related question. Did you use the recently released 2.79 or was it still 2.78?

      • Jeff Kampman
      • 2 years ago

      Addressed in testing methods but we updated all our rigs to 2.79.

        • chuckula
        • 2 years ago

        Good. I wanted the specific version number since it explains some of the changes in the results that occurred since the 1950X review published last month. Thanks again for your hard work on this!

    • Jigar
    • 2 years ago

    The efficiency results had me scratching my head; apart from TR, every other site is showing the Core i9-7980XE drinking more electricity. What kind of settings were you guys running? Something different than the others, for sure.

      • Jeff Kampman
      • 2 years ago

      Stock.

      (DDR4-3600, of course, but otherwise…)

      • chuckula
      • 2 years ago

      Given that the benchmark being used for power consumption was Blender, it’s not all that shocking. If you look at the 1950X review [url=https://techreport.com/review/32390/amd-ryzen-threadripper-1920x-and-ryzen-threadripper-1950x-cpus-reviewed/14<]here[/url<], which used the older Blender 2.78c instead of the new 2.79 used in this review, a few things jump out at you:

      1. The Intel systems slowed down somewhat but gained efficiency in the process.
      2. The AMD systems sped up somewhat but lost efficiency in the process.

      • xigmatekdk
      • 2 years ago

      Most sites find the 7980XE to be more efficient than the 1950X. Hell, it even runs cooler despite using thermal paste instead of solder.

    • chuckula
    • 2 years ago

    Fascinating change in the 99th percentile results from the earlier Skylake X reviews on the 7900X. Good to see the platform is being tuned better.

    Here’s the official theme song of the review: [url<]https://youtu.be/VEJ8lpCQbyw[/url<] Great review as always!
