Intel’s Core i9-9980XE CPU reviewed

Intel’s high-end desktop dominance has been under siege ever since the name Threadripper was first uttered. The blue team’s first round of Core X CPUs, topping out with the i9-7980XE’s 18 cores and 36 threads, held a front against the first wave of AMD’s high-end Ryzens, but PC builders groused nonetheless—and rightfully so. Where did the solder that had joined heat spreaders to dies of past high-end Intel CPUs go, especially on a $2000 chip? Why did X299 motherboards have problems keeping their VRMs cool enough under extreme loads with the platform’s highest-core-count CPUs? Why did quad-core parts exist for the X299 platform at all?

Next to the unrestricted, segmentation-free approach of Threadripper CPUs and the X399 motherboards, the X299 platform and the breadth of the CPUs that could light it up looked by turns stingy and scattered. The Core i7-7800X and i7-7820X offered only 28 PCIe lanes from the CPU, compared to the 44 from the Core i9-7900X and better CPUs in the lineup. The entry-level Core i7-7800X didn’t even benefit from Turbo Boost Max 3.0, one of the headlining innovations of the Core X lineup. Worse, the 16 CPU-powered PCIe lanes from the Kaby Lake-powered Core i5-7640X and Core i7-7740X required motherboard makers to employ complex lane-switching schemes even on high-end mobos that seemed unlikely to ever play host to their four cores.

Intel may have had good intentions in providing builders a wide range of choices and an upgrade path in putting together high-end systems, but the initial headaches of X299 suggested that strategy had stretched the platform a bit too far.

AMD didn’t stand still with its high-end desktop CPUs in the intervening time, either. The Threadripper 2990WX didn’t just challenge the i9-7980XE in some workloads—it actually beat Intel’s highest-end desktop chip in some tasks for less money (though not in every test). Glancing though that blow may have been, the fact that AMD was even able to lay a finger on Intel’s high-end desktop performance crown was an indignity unimaginable just a couple of years ago. The ball has been in Intel’s court since, and that brings us to the new range of Core X CPUs launching today.

            Base clock   Peak single-core    Turbo Boost Max 3.0   Cores/    L3 cache
            speed (GHz)  boost speed (GHz)   peak speed (GHz)      threads   (MB)       Price
i9-9980XE   3.0          4.4                 4.5                   18/36     24.75      $1979
i9-9960X    3.1          4.4                 4.5                   16/32     22         $1684
i9-9940X    3.3          4.4                 4.5                   14/28     19.25      $1387
i9-9920X    3.5          4.4                 4.5                   12/24     19.25      $1189
i9-9900X    3.5          4.4                 4.5                   10/20     19.25      $989
i9-9820X    3.3          4.1                 4.2                   10/20     16.5       $889
i7-9800X    3.8          4.4                 4.5                   8/16      16.5       $589

All seven chips share a 165-W TDP, 44 CPU-provided PCIe lanes, and support for four channels of DDR4-2666 memory.

Intel isn’t classifying these chips as anything other than members of the Skylake family in its official materials, but that nonchalant code-naming scheme hides a range of under-the-hood improvements in the i9-9980XE and its stablemates. These chips benefit from some of the improvements in both eighth-gen and ninth-gen Coffee Lake mainstream CPUs.

First off, these new high-end chips are fabricated on Intel’s 14-nm++ process. 14-nm++ allows Intel’s engineers to lay down transistors that can be driven harder for better performance in exchange for only a slight increase in leakage current. In short, we can expect a better performance foundation for these CPUs without drastic increases in power consumption, and that reinforcement comes out in some minor clock-speed adjustments from top to bottom. Intel now specifies a Turbo Boost Max 3.0 speed of 4.5 GHz across the board for these chips (save for the Core i9-9820X and its victim-of-segmentation 4.2-GHz TBM 3.0 speed). Depending on the chip in question, peak Turbo Boost speeds have also increased anywhere from 100 MHz to 200 MHz (again excluding the odd-man-out i9-9820X). 

Indeed, the existence of the Core i9-9820X suggests the product managers in charge of revitalizing the Core X lineup couldn’t keep the segmentation goblins entirely at bay. Those mischief-makers managed to get one weird chip into the new lineup. The i9-9820X has the 10 cores and 20 threads of its immediate superior, the i9-9900X, but in exchange for a $100 lower suggested price, it loses 2.25 MB of L3 cache, 300 MHz of peak Turbo Boost speed from any given core, and 300 MHz of Turbo Boost Max 3.0 speed from the two best cores on the chip. Perhaps this CPU is meant for overclockers trying to get ahold of 10 Skylake cores for as little cash as possible. For most high-end builders, though, we’d guess the extra $100 for the much-better-on-paper i9-9900X isn’t going to be a major obstacle.

For overclockers who do want to try and push those 10 cores to their limit, Intel has come to its senses about the material it uses to conduct heat from chip to cooler. Following in the footsteps of the Core i9-9900K and friends, refreshed Core X CPUs enjoy the return of solder thermal interface material (TIM). In tandem with the large dies that naturally arise from putting as many as 18 cores on a CPU, that metallic TIM could let overclockers cool these chips without resorting to the risks of delidding and repasting with more thermally conductive materials than Intel’s factory goop.

A conceptual view of the mesh interconnect used to join Skylake Server cores together

The benefits of big chips for heat transfer and cooling could apply to the entire Core X refresh lineup, too. You’ll note that many chips in this new lineup sport more L3 cache than the 1.375 MB per Skylake Server core would naturally add up to. That’s because Intel can disable cores on these chips without turning off the associated slice of L3 cache on the mesh that joins cores and shared caches together, and that fact offers a tantalizing clue as to the silicon being used to make these chips. 

As Ian Cutress at Anandtech has pointed out, the fact that refreshed Core X CPUs boast more L3 cache than active cores would normally offer—especially at the low end—suggests that Intel is using its high-core-count (HCC) Skylake Server die as the starting point for all of the chips in this lineup. Another bit of backup for that suggestion comes from the fact that all Core X CPUs now come with a 165-W TDP, a figure previously reserved for only the four highest-core-count CPUs in the Core X lineup. In tandem with solder TIM and the process improvements of 14-nm++, the use of a uniformly large die across the refreshed Core X lineup could offer better overclocking potential across the board, thanks to the fact that there’s more surface area that can be joined to the heat spreader above by way of that solder.
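That cache math is easy to check. The Python sketch below divides each chip's listed L3 capacity by the 1.375-MB-per-core slice size to count enabled cache slices; any surplus over the active core count points to slices left lit on disabled cores (core counts and L3 capacities here come from Intel's spec sheets):

```python
# Skylake Server pairs each core with a 1.375-MB slice of L3 on the mesh.
# Dividing a chip's listed L3 by that slice size reveals how many slices
# stay enabled -- more slices than active cores hints at harvested silicon.
SLICE_MB = 1.375

lineup = {  # chip: (active cores, listed L3 in MB), per Intel's specs
    "i9-9980XE": (18, 24.75),
    "i9-9960X":  (16, 22.0),
    "i9-9940X":  (14, 19.25),
    "i9-9900X":  (10, 19.25),
    "i9-9820X":  (10, 16.5),
    "i7-9800X":  (8,  16.5),
}

for chip, (cores, l3_mb) in lineup.items():
    slices = round(l3_mb / SLICE_MB)
    print(f"{chip}: {cores} cores, {slices} L3 slices enabled "
          f"({slices - cores} on disabled cores)")
```

The surplus slices cluster on the cheapest chips in the lineup, consistent with cores being fused off while their cache slices stay active.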

One final segmentation demon that’s been banished from refreshed Core X CPUs is the 28-PCIe-lane switch that used to get flipped on Intel’s entry-level high-end parts. Every refreshed Core X part offers 44 CPU-powered PCIe lanes for motherboard makers to distribute as they please. While that figure still doesn’t match the 60 PCIe lanes AMD fans can enjoy from every Threadripper CPU, across-the-board consistency from the blue team is a welcome olive branch for I/O- or peripheral-hungry builders who might not have wanted to spend extra for cores, threads, and consequent cooling hardware that might not have been needed on the road to expansion bliss.

Getting to know the Core i9-9980XE

Core i9-9980XE on the left, Core i9-7980XE on the right

Intel only sent us one chip to test today: the highest-end Core i9-9980XE. At $1979, this chip is the latest Extreme Edition standard-bearer. If you want the very best of refreshed Core X, this chip is it. Rather than the vague spec table that Intel provides, let’s dig into the i9-9980XE’s per-core Turbo table and see just what the combination of process tech improvements and solder buys us versus the outgoing Core i9-7980XE.

Number of cores active 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
i9-9980XE Turbo Boost speeds (GHz) 4.5 4.5 4.2 4.2 4.1 4.1 4.1 4.1 4.1 4.1 4.1 4.1 3.9 3.9 3.9 3.9 3.8 3.8
i9-7980XE Turbo Boost speeds (GHz) 4.2 4.2 4.0 4.0 3.9 3.9 3.9 3.9 3.9 3.9 3.9 3.9 3.5 3.5 3.5 3.5 3.4 3.4

Our test motherboard suggests the i9-9980XE incorporates its two Turbo Boost Max 3.0-capable cores into the up-front Turbo table of the chip, rather than leaving the operating system to work with an Intel driver to identify and pin workloads to those cores. Around the time the first round of Skylake-X CPUs launched, Intel said it was working with Microsoft to expose favored cores like those for Turbo Boost Max 3.0 to the operating system directly. Perhaps the incorporation of those 4.5-GHz bins into the Turbo table (rather than the official 4.4-GHz peak Turbo Boost 2.0 speed of the chip) is one puzzle piece in that larger effort.

From three to 12 active cores, the i9-9980XE boasts only a 200-MHz clock-speed advantage over its predecessor. Once 13 or more cores are active, however, Intel seems to have taken advantage of some of the headroom the 14-nm++ process and solder TIM afford to keep Turbo Boost clocks a full 400 MHz higher than those of the i9-7980XE. We'll have to check just how much power that move burns later on, but a 400-MHz bump across 18 cores and 36 threads is a juicy improvement. Let's see just how that improvement plays out in our test suite.
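Those per-core-count gains are easier to see computed directly. This quick Python sketch diffs the two Turbo tables above:

```python
# Per-core-count Turbo Boost deltas between the i9-9980XE and i9-7980XE,
# taken from the Turbo tables our test motherboard reported.
i9_9980xe = [4.5, 4.5, 4.2, 4.2, 4.1, 4.1, 4.1, 4.1, 4.1, 4.1,
             4.1, 4.1, 3.9, 3.9, 3.9, 3.9, 3.8, 3.8]
i9_7980xe = [4.2, 4.2, 4.0, 4.0, 3.9, 3.9, 3.9, 3.9, 3.9, 3.9,
             3.9, 3.9, 3.5, 3.5, 3.5, 3.5, 3.4, 3.4]

for active, (new, old) in enumerate(zip(i9_9980xe, i9_7980xe), start=1):
    delta_mhz = round((new - old) * 1000)
    print(f"{active:2d} active cores: {new} GHz vs {old} GHz (+{delta_mhz} MHz)")
```

The deltas come out to 300 MHz at one and two active cores, 200 MHz from three through 12, and 400 MHz from 13 through 18.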

 

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

Processor: Intel Core i7-8700K, Intel Core i5-8400, Intel Core i7-9700K, Intel Core i9-9900K
CPU cooler: Corsair H100i Pro 240-mm closed-loop liquid cooler
Motherboard: Gigabyte Z390 Aorus Master
Chipset: Intel Z390
Memory size: 16 GB
Memory type: G.Skill Flare X 16 GB (2x 8 GB) DDR4 SDRAM
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 2T
System drive: Samsung 960 Pro 512 GB NVMe SSD

Processor: AMD Ryzen 7 2700X, AMD Ryzen 5 2600X
CPU cooler: EK Predator 240-mm closed-loop liquid cooler
Motherboard: Gigabyte X470 Aorus Gaming 7 Wifi
Chipset: AMD X470
Memory size: 16 GB
Memory type: G.Skill Flare X 16 GB (2x 8 GB) DDR4 SDRAM
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 2T
System drive: Samsung 960 EVO 500 GB NVMe SSD

Processor: Threadripper 2950X, Threadripper 1920X, Threadripper 2920X, Threadripper 2970WX, Threadripper 2990WX
CPU cooler: Enermax Liqtech TR4 240-mm closed-loop liquid cooler
Motherboard: Gigabyte X399 Aorus Xtreme
Chipset: AMD X399
Memory size: 32 GB
Memory type: G.Skill Flare X 32 GB (4x 8 GB) DDR4 SDRAM
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 1T
System drive: Samsung 970 EVO 500 GB NVMe SSD

Processor: Core i7-7820X, Core i9-7900X, Core i9-7960X, Core i9-7980XE, Core i9-9980XE
CPU cooler: Corsair H100i Pro 240-mm closed-loop liquid cooler, Corsair H110i GT 280-mm CLC
Motherboard: Gigabyte X299 Designare EX
Chipset: Intel X299
Memory size: 32 GB
Memory type: G.Skill Flare X 32 GB (4x 8 GB) DDR4 SDRAM
Memory speed: 3200 MT/s (actual)
Memory timings: 14-14-14-34 1T
System drive: Intel 750 Series 400 GB NVMe SSD

Our test systems shared the following components:

Graphics card: Nvidia GeForce RTX 2080 Ti Founders Edition
Graphics driver: GeForce 411.63
Power supply: Thermaltake Grand Gold 1200 W (AMD systems), Seasonic Prime Platinum 1000 W (Intel systems)

Some other notes on our testing methods:

  • We tested the Core i9-9980XE in both stock and overclocked configurations. Our overclocked settings used a 45x multiplier for a 4.5-GHz all-core result.
  • All test systems were updated with the latest firmware, graphics drivers, and Windows updates before we began collecting data, including patches for the Spectre and Meltdown vulnerabilities where applicable. As a result, test data from this review should not be compared with results collected in past TR reviews. Similarly, all applications used in the course of data collection were the most current versions available as of press time and cannot be used to cross-compare with older data.
  • Our test systems were all configured using the Windows Balanced power plan, including AMD systems that previously would have used the Ryzen Balanced plan. AMD’s suggested configuration for its CPUs no longer includes the Ryzen Balanced power plan as of Windows’ Fall Creators Update, also known as “RS3” or Redstone 3.
  • Unless otherwise noted, all productivity tests were conducted with a display resolution of 2560×1440 at 60 Hz. Gaming tests were conducted at 1920×1080 and 144 Hz.

Our testing methods are generally publicly available and reproducible. If you have any questions regarding our testing methods, feel free to leave a comment on this article or join us in the forums to discuss them.

 

Memory subsystem performance

The AIDA64 utility includes some basic tests of memory bandwidth and latency that will let us peer into the differences in behavior among the memory subsystems of the processors on the bench today, if there are any.

Some quick synthetic math tests

AIDA64 also includes some useful micro-benchmarks that we can use to flush out broad differences among CPUs on our bench. The PhotoWorxx test uses AVX2 instructions on all of these chips. The CPU Hash integer benchmark uses AVX and Ryzen CPUs’ Intel SHA Extensions support, while the single-precision FPU Julia and double-precision Mandel tests use AVX2 with FMA.

 

Javascript

The usefulness of Javascript microbenchmarks for comparing browser performance may be on the wane, but these tests still allow us to tease out some single-threaded performance differences among CPUs. As part of our transition to using the Mechanical TuRk to benchmark our chips, we’ve had to switch to Google’s Chrome browser so that we can automate these tests. Chrome does perform differently on these benchmarks than Microsoft Edge, our previous browser of choice, so it’s vitally important not to cross-compare these results with older TR reviews.

WebXPRT 3

The WebXPRT 3 benchmark is meant to simulate some realistic workloads one might encounter in web browsing. It’s here primarily as a counterweight to the more synthetic microbenchmarking tools above.

WebXPRT isn’t entirely single-threaded—it uses web workers to perform asynchronous execution of Javascript in some of its tests.

 

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
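Matching jobs to hardware threads follows the usual make convention; a minimal sketch of the idea (the actual Qtbench script's knob may be named differently, so treat this as illustrative) looks like:

```python
# Match the compile job count to the machine's logical CPU count.
# 'make -j N' is the conventional way to express this; Qtbench's own
# configuration option may differ (an assumption on our part).
import os

threads = os.cpu_count()  # logical CPUs: 36 on an i9-9980XE, for example
print(f"make -j{threads}")
```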

File compression with 7-Zip

The free and open-source 7-Zip archiving utility has a built-in benchmark that occupies every core and thread of the host system.
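The benchmark lives behind 7-Zip's "b" command. A minimal sketch of scripting it, assuming a 7z binary is available on the PATH, might look like:

```python
# Run 7-Zip's built-in benchmark ('7z b'), which loads every core and thread.
# Assumes a 7z binary is on the PATH; we bail out gracefully if it isn't.
import shutil
import subprocess

if shutil.which("7z"):
    result = subprocess.run(["7z", "b"], capture_output=True, text=True)
    print(result.stdout)
else:
    print("7z not found on PATH")
```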

Disk encryption with Veracrypt

 

Cinebench

The evergreen Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Blender

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Corona

Corona, as its developers put it, is a “high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.”

The company has made a standalone benchmark with its rendering engine inside, so it’s a no-brainer to give it a spin on these CPUs.

Indigo

Indigo Bench is a standalone application based on the Indigo rendering engine, which creates photo-realistic images using what its developers call “unbiased rendering technologies.”

V-Ray

 

Handbrake

Handbrake is a popular video-transcoding app that recently hit version 1.1.1. To see how it performs on these chips, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.
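As a rough equivalent of those settings, here is a hypothetical HandBrakeCLI invocation built in Python. The file names are placeholders and we drove the app interactively, so this is illustrative rather than a record of our exact command:

```python
# Hypothetical HandBrakeCLI reconstruction of our transcode settings:
# HEVC via x265, 30 FPS, scaled to 1920x1080, output to MKV.
cmd = [
    "HandBrakeCLI",
    "-i", "source_4k.mov",          # placeholder for the ~2-minute 4K clip
    "-o", "transcoded.mkv",
    "--encoder", "x265",             # HEVC via the x265 encoder
    "--rate", "30",
    "--width", "1920", "--height", "1080",
]
print(" ".join(cmd))
```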

SPECwpc WPCcfd

Computational fluid dynamics is an interesting and CPU-intensive workload. For years and years, we’ve used the Euler3D benchmark from Oklahoma State University’s CASElab, but that benchmark has become more and more difficult to continue justifying in today’s newly-competitive CPU landscape thanks to its compilation with Intel tools (and the resulting baked-in vendor advantage).

We set out to find a more vendor-neutral and up-to-date computational fluid dynamics benchmark than the wizened Euler3D. As it happens, the SPECwpc benchmark includes a CFD test constructed with Microsoft’s HPC Pack, the OpenFOAM toolkit, and the XiFoam solver. SPECwpc allows us to yoke every core and thread of our test systems for this benchmark.

SPECwpc NAMD

The SPECwpc benchmark also includes a Windows-ready implementation of NAMD. As its developers describe it, NAMD “is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations.” Our ambitions are considerably more modest, but NAMD seems an ideal benchmark for our many-core single-socket CPUs.

 

Digital audio workstation performance

After an extended hiatus, the duo of DAWBench project files—DSP 2017 and VI 2017—return to make our CPUs sweat. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the folks there. Be sure to check out their many fine digital audio products.

A very special thanks also to RME Audio, who cut us a deal on one of its Babyface Pro audio interfaces to assist us with our testing. RME’s hardware and software is legendary for its low latency and high quality, and the Babyface Pro has exemplified those virtues over the course of our time with it.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU at a 24-bit depth and a 96-kHz sampling rate, and at two ASIO buffer depths: 96, the lowest our interface will allow at a 96-kHz sampling rate, and 128. In response to popular demand, we’re also testing two buffer depths at a 48-kHz sampling rate: 64 and 128. We added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts.
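Buffer depth and sampling rate together set how long the CPU has to fill each audio buffer, which is what makes the smaller buffers the punishing ones. The arithmetic is simple:

```python
# Per-callback latency budget in milliseconds: buffer_frames / sample_rate.
# These are the four ASIO configurations we tested.
settings = [(96_000, 96), (96_000, 128), (48_000, 64), (48_000, 128)]

for rate_hz, frames in settings:
    ms = frames / rate_hz * 1000
    print(f"{frames}-frame buffer @ {rate_hz // 1000} kHz: {ms:.2f} ms per buffer")
```

At 96 kHz with a 96-frame buffer, the entire plugin chain gets just one millisecond per callback.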

Apologies for the lack of results at 96 kHz and a buffer depth of 96 here. Thanks to something in the chain of Reaper, Windows 10, and our ASIO driver, our many-core CPUs couldn’t run the 96-96 test at all—we got popping and crackling from the get-go.




 

Conclusions

Let’s summarize the reams of data on the preceding pages using one of our infamous scatter plots. To more accurately represent each chip’s price-to-performance ratio, we used real-world pricing data from Newegg where it was available and manufacturers’ suggested prices where it wasn’t.

For the straight-and-narrow stock-clocked system, the story of the Core i9-9980XE is a simple one. Where Intel CPUs were already superior to the competition, the i9-9980XE offers some nice performance improvements. In applications that can light off Threadripper WX CPUs’ rocket boosters, the i9-9980XE’s under-the-hood refinement can’t overcome the Threadripper 2990WX’s sheer core-count advantage.

Overclock the i9-9980XE, though, and the 18-core chip both extends its leads considerably and closes some of the gaps the 2990WX opens. Part of that performance comes courtesy of Intel’s decision to resume soldering the heat spreaders to the die on its refreshed Core X chips. We couldn’t take our i9-7980XE past about 4.4 GHz without fighting thermal limits, but thanks in part to the reintroduction of solder TIM, we were able to push the i9-9980XE to an impressive 4.5 GHz on all 18 cores without calling the fire department—all with attainable, off-the-shelf cooling hardware. Hallelujah for that.

Presuming the potential errors that overclocking might introduce are tolerable in an i9-9980XE system, those willing to deploy custom liquid cooling loops may find that they can take all 18 cores of this chip to the limits of the underlying silicon, not just what paste thermal interface material might allow. Our casual overclocking suggests those could be some exciting limits to probe. Folks who don’t want to push the limits of their i9-9980XEs may find it easier to cool their chips quietly, too.

The only complaint some may harbor about Intel’s refreshed Core X chips is that the company didn’t see any cause to cut prices on its high-end chips this time around. The reason, I imagine, is that AMD and Intel are trying to sell high-end desktop buyers two different stories of performance as competition heats up at the top of the CPU heap.

In the $1200-and-up range we’re concerned with today, AMD seems plenty willing to make chips that really rip in workloads where sheer core count dominates, like rendering and some varieties of scientific computing. The tradeoff is that Threadripper WX chips can fall far behind in other tasks. That inconsistency leads to Threadripper WX chips’ lower-than-might-be-expected standings in our overall value chart, even as they excel in a few particular workloads.

Intel, for its part, seems unwilling to create chips with any corner cases, even if that means it can’t offer as many raw cores and threads as AMD can for the dollar. Core X CPUs may not take home the gold in every workload, but they’ll never put owners in a position where they can’t run certain tasks (or subsets of tasks) acceptably well, either. The extra money Intel CPUs command per core, then, is essentially insurance that you won’t be left hanging if the idea of inconsistent performance in any potential workload bothers you.

If you know that a Threadripper WX CPU benefits your work and don’t care about the cases where it might not, then those chips can still be screaming performance bargains. Every person needs different things from a PC, so check around and see how your work maps onto AMD and Intel’s high-end desktop platforms before making the leap.

If you have less than $1000 to spend on a CPU, Intel’s latest Core X chips might not move the needle much (presuming you even need four channels of memory and gobs of CPU-powered PCIe lanes to begin with). We already know that the Threadripper 2920X and Threadripper 2950X offer better performance than the Core i9-7900X in many heavy-duty workloads for less money, and 10 higher-clocked Skylake-X cores may not close that gap much in the case of the $889 i9-9820X or $989 i9-9900X. That’s before we consider the $650-ish price tag on the remaining stock of Threadripper 1950X chips, as that processor remains quite formidable in its own right.

If the Core i9-9980XE’s time on our test bench is any indication, though, the reintroduction of solder TIM, the advantages of the 14-nm++ process, and the end of PCIe lane segmentation from the CPU should make all Core X CPUs more attractive to those that want or need what they offer. Intel’s revised 10-core chips will almost certainly overclock better than Threadripper parts, and they should still hold a slight edge in whatever heights of high-refresh-rate gaming a builder might foolishly try to scale on high-end desktop platforms. It’s a shame we didn’t get some of those cheaper Core X chips to test, as we imagine many high-end desktop-building enthusiasts will be more interested in their performance than that of the halo i9-9980XE.

Ultimately, AMD’s X399 platform still offers higher core counts for the dollar and fewer restrictions on RAM capacity or types (including ECC support), but thanks to that renewed competition, fans of the blue team are undoubtedly getting more for their money than they were a year ago, whether in higher clocks, a better process technology, the potential for cooler operation, or more CPU-connected PCIe lanes in lower-end parts.

Whether those improvements are enough to draw buyers back into the Skylake-X fold as excitement builds around AMD’s next-generation server CPUs—and the potential Threadrippers derived from them—is a chapter yet to be written. For the moment, though, Intel is doing its best to put a happy ending on this chapter of its highest-end desktop chips, and a Core i9-9980XE ticking away at 4.5 GHz or better will make even the most demanding enthusiasts quite happy indeed.

Comments closed
    • Dsp4live
    • 9 months ago

    Best CPU at this time for Digital Audio Workstations.
    Personally I like several fast Quad Cores instead of a giant i9-9980XE.
    But I can see me buying an 8 Core Intel as my main workstation and adding my main fast quad to the other 1U fast quads.
    All in a nice 8U ATA Rack.

    Really appreciate the attention paid to Audio Benchmarks.

    • Questors
    • 1 year ago

    How much did Intel pay TR to be their official advertiser? I was sure I saw a SHILL watermark in the background. Maybe if I turn my monitor to the side I can see it again. I am so sure I saw it!

    How many different people from how many different parts of the world did various pieces of this review, and what blind man assembled it? Did you even read your own review?

    Where are the stats for the 7980XE? The stats that show the 9980XE is the exact same chip with solder and slightly higher clocks, but with the same insane price tag? Since it replaces the former, that is rather a crucial point to leave out.

    Let’s try writing unbiased reviews by testing these products against the competitor, the previous part it replaces and list the facts, just the facts and let the reader form their own conclusions?

    It’s called unbiased journalism, AKA something that is damn near dead these days.

      • chuckula
      • 1 year ago

      Bad technique, you don’t want to make it sound like you are serious.

      Oh wait… You actually believe that crap?

      Finally, an example of what a real shill looks like.

      Thanks for having the balls to post early when the thread was still active… Oh wait you didn’t.
      Cowardly and biased, a great combination.

      • Srsly_Bro
      • 1 year ago

      Bro, I’m at times critical of some aspects of the review, but you’re just rambling in a mostly dead thread, hoping no one calls you out for your fake criticism.

      A buyer of either CPU should do their research, and even if they don’t, the fact that both are Intel CPUs with the same core counts, one a little faster on clock speed, will make it clear to even a Best Buy shopper that one is a little faster.

      Both CPUs don’t need to be listed for TR to still be unbiased.

      What’s your real issue? The one you made a fuss about isn’t much of one.

      #license2shill

    • BorgOvermind
    • 1 year ago

    Can’t wait for next year’s i9-9998XE-v2 built on 14nm+++++ that doesn’t beat the previous release.

    “$1000 to spend on a CPU” – I’d spend that on a GPU instead.

      • Questors
      • 1 year ago

      $2000.00 not $1000.00

    • CyberFly
    • 1 year ago

    Hi Jeff, longtime reader here. I noticed recently you were looking for new benchmark ideas. Since the DAWbench benchmarks seem to be starting to cause controversy amongst fans of a certain architecture and digital audio production is an area that has many keenly interested, I have a benchmark suggestion that you can run alongside your DAWbench as a “second opinion” audio performance perspective. It uses Propellerhead Reason which has a free Demo mode which I think is sufficient for you to run the benchmark. If not, you can probably drop Propellerhead a line and maybe they might send you a license.

    https://forum.reasontalk.com/viewtopic.php?t=7501402

      • Dsp4live
      • 9 months ago

      I was going to suggest the lean problem free VST Host Bidule, but no more free demo downloads.
      Nice having DAWbench or any Digital Audio testing done here.

      We’re more of an appreciative group of folks grateful for the effort put into reviewing.
      We’ll make up for all the less satisfied types.
      As they say in Sho Biz, you can always go get your own show.

    • watzupken
    • 1 year ago

    As I read the conclusion, I can’t seem to help feel that there is some sort of Intel bias here. Also may be I am wrong, but it doesn’t seem fair to compare an overclocked Intel part against a stock TR. I looked at the TR review recently, but don’t recall some overclocking done to compare with the Intel parts. So this is a big red flag to me. Objectively, OC vs OC parts makes sense.

      • Klimax
      • 1 year ago

      There are both sets of numbers – standard and OC.

      • K-L-Waster
      • 1 year ago

      Part of the reason for the OC test is to see if going back to a soldered IHS has been beneficial.

      • Questors
      • 1 year ago

      The bias is there and it has nothing to do with OCing. It has to do with the reviewer leaving out some important information and then attempting to guide the reader’s conclusion.

    • HisDivineOrder
    • 1 year ago

    And now we know why Intel didn’t put solder on its chips.

    So it could always have that in their bag for later when they need review sites to give them semi-good reviews. Because if I were reviewing a chip, I don’t think I’d give high marks to any chip where Intel saved an obvious gotcha for the moment when they are caught off guard.

    I’d probably just give them a bad review for that.

    • nerdrage
    • 1 year ago

    “Harder, better, faster, solder”

    I see what you did there… well done.

    • Unknown-Error
    • 1 year ago

    Serious note. I almost never criticize TR and their excellent work but one reason TR charts/graphs are my favorites is that they are nice, clean and uncluttered. Easy on the eyes and to the point. But this review changed that. With the insertion of the OC numbers, the charts are more….messy. Still much better than for example TechPowerup which has mile long charts and hurt your eyes. More bars does not necessarily mean better.

    • djayjp
    • 1 year ago

    Nobody cares about the 9600k…

      • K-L-Waster
      • 1 year ago

      That’s probably an exaggeration — but certainly anyone who is looking at the 9980 isn’t looking at the 9600…

        • djayjp
        • 1 year ago

        Yeah I know but they still haven’t reviewed it… :/

    • Srsly_Bro
    • 1 year ago

    DAW bench is terrible for AMD. Why use it? Make Intel look better? The 6 core Intel is beating the 16 core AMD.

      • dragontamer5788
      • 1 year ago

      Believe it or not, some people make music on their computers.

        • Redocbew
        • 1 year ago

        Yeah, this is the kind of thing I’d want to know if I was one of those people.

        • jensend
        • 1 year ago

        The question isn’t whether people make music on their computers, it’s whether DAWBench is a representative measuring stick.

        Do you have any desire to use 2000+ voice polyphony in your compositions?

        For many kinds of serious music production, people won’t frequently use more than about two dozen instruments. I imagine few people use much more than 100 instruments, and that therefore the non-DSP tests are simply irrelevant since the slowest CPUs of today sail over that bar with ease.

        I’m not confident in the benchmark. Perhaps it was a good reflection of real-world performance issues in audio production when it was developed, back in 2005. I suspect that in real world use today modern CPUs from both vendors are not bottlenecked in the fashion DAWBench measures.

          • Krogoth
          • 1 year ago

          Pretty much. It is an interesting academic test, but CPU performance hasn't been an issue for audio production for a while unless you are trying to do audio engineering on some ultra-low-power, embedded solution.

            • jamesbond
            • 1 year ago

            Thanks so much for providing the Daw Bench tests. For audio professionals these tests provide a very good comparison of various CPUs over time. Please ignore the comments by people who do not understand their intended use. I have subscribed to your Forum for precisely this reason as few others bother to do tests for people who use their computers for audio production.

            • Krogoth
            • 1 year ago

            They are intended for audio professionals, who haven't been limited by CPUs for almost a decade now. We aren't running single-core Athlons and Pentium 3/4s anymore. You would be speccing the system more for I/O throughput and memory.

            OTOH, DPC latency (which does matter a lot for this crowd) is entirely dependent on the platform, drivers, UEFI/BIOS, and OS. You've got bigger problems if the CPU is the issue.

            • morphine
            • 1 year ago

            [quote<]They are intended for audio professionals which haven't been limited by CPUs for almost a decade now.[/quote<]

            I take it you don't produce music. That's not true, and has rarely ever been. Contemporary DSP effects hit CPUs [i<]hard[/i<], including but not limited to guitar/bass amp sims (Bias Amp, Amplitube, TH3, etc.). Just because these new CPUs are insanely powerful in the virtual instrument tests doesn't mean that the benchmark is of academic interest -- much the opposite, in fact.

            • Krogoth
            • 1 year ago

            Audio engineers/digital music creators aren't going to opt for HEDT-tier CPUs, though. They would more likely be using mobile/DTR CPUs for their work.

            In the grand scheme of things, CPUs really haven't been an issue for digital audio and music production in fairly recent years. You are looking more towards the platform itself and how to minimize latency.

            • Redocbew
            • 1 year ago

            You are just as wrong as the bare-chested-and-shockingly-hairy-dude jacket that keeps showing up in the ad bars on this end.

            • Krogoth
            • 1 year ago

            This isn’t the 1990s and early to mid 2000s.

            It is precisely the same reason why dedicated hardware DSPs are pretty much a small niche these days. CPUs simply caught up and the overhead of digital audio processing has become relatively trivial even when handling scores of simultaneous audio effects.

            CPU performance isn't really that much of an issue for the digital audio world anymore. Latency is now the big problem for recording and digital music production. That's mostly a software issue, though.

            • alloyD
            • 1 year ago

            No, hardware DSPs are not niche. Professional audio engineers drop a LOT of money on dedicated DSP modules so that they can offload as much as they can from the CPU. The less capital you have to build a high-end processing rig, the more likely you are to rely more on your CPU.

            • DancinJack
            • 1 year ago

            Kroggy thinks he is THE authority on virtually everything, making sweeping declarations and predictions that he dreams up on the regular. It’s super weird.

            • Redocbew
            • 1 year ago

            To be fair, processing audio is only tangentially related to what I do, but yeah it is a little strange sometimes. I wish it were true that you could easily process any kind of digital audio on a mobile CPU. It’d make some of my current projects a lot easier. 🙂

            • derFunkenstein
            • 1 year ago

            It’s like everything else. The way a professional uses a DAW and the way I use it are two completely different ballgames. They need all the CPU they can get.

            • morphine
            • 1 year ago

            When was the last time you opened a DAW and wrote a song? When was the last time you spoke to an audio producer? Can you name some heavy-hitting plugins without googling? [b<]When was the last time you tried to do real-time VSTs/VIs[/b<], something that requires CPU horsepower accompanied by good drivers? Do you run a studio?

            Because it sounds like the answer to all of the above is "no." Saying something like "CPUs haven't mattered for audio for ten years" is all manner of wrong and misinformed.

            • Krogoth
            • 1 year ago

            None of that needs the computing power of a modern HEDT-tier CPU outside of outlandish synthetic loads (a.k.a. benches which are designed to showcase the difference between CPUs). You would be allocating more of your budget and concerns towards the recording equipment, sampler, and software to make it all happen than the CPU itself.

            The 9700K and 8700K in the DAW suite are more than up to the task of real-world workloads without breaking the bank. The real kicker is finding a mobile CPU that can effortlessly handle that and more without venturing into DTR land.

            • DancinJack
            • 1 year ago

            It’s almost like you didn’t even read what he said.

            • TobyAM
            • 1 year ago

            You seem to have a solid understanding of the mechanics of audio production, but your real-world experience doesn't seem to reflect the high end of production.

            I’ve been struggling with high end dual Xeon systems and the like for many years with composition rigs and post production, each for different reasons. Yes DPC latency is an issue you always need to optimize, but that is only to make a system under a full work load even operate at all. CPU overhead has always been an issue with extreme sessions.

            I am here reading this article, because the work I do every day often hits a processing limit. FFT processing has become ubiquitous in modern post, and the software has grown to use the newer available horsepower. Now that workflows include much more of this more powerful technology, in series, clock speeds are more important than ever.

            Please stop passionately asserting what you don’t really seem to know.

            • just brew it!
            • 1 year ago

            Sure, you can get by with a low-end CPU if all you’re doing is recording a couple of channels direct to the HDD. But start doing any sort of real production work and you need some muscle. Lots of channels with effects plug-ins and automation running in real-time eats CPU cycles and cores.

            • TheRazorsEdge
            • 1 year ago

            Do you have any idea how hard Autotune works just to correct the hideous warbling of Nicki Minaj?

            • Redocbew
            • 1 year ago

            I’m not sure that’s a problem technology is fit to solve.

            • Klimax
            • 1 year ago

            You need weather modelling supercomputer for that…

            • Wonders
            • 1 year ago

            Hi Krogoth, I’m a fan of yours. You’re clearly very knowledgeable about computing, and a stand-up fellow all around. But in terms of pro audio your comments aren’t wise to the reality. Orchestral composition (film/TV soundtracks) has been plagued by hardware limitations for at least 20 years. Realism and fidelity are perpetual moving targets – just like in graphics. Today many professional orchestral composers still use an array of slave computers over Gigabit ethernet, in conjunction with beastly workstations.

            • dragontamer5788
            • 1 year ago

            I’m no artist of any kind. But my experience with artists tells me one thing:

            Artists always figure out how to use more computing power. There's always a new effect, be it a Photoshop filter, a 3D shader, or an audio plugin, that manages to eat up all of your computing resources. [b<]ALWAYS[/b<]. It's always the artists who run into weird bottlenecks: GPU bandwidth in 3D programs, or cache issues for audio plugins, and whatnot.

            • derFunkenstein
            • 1 year ago

            It’s like he’s never heard of freeze tracks or why you’d need them. Maybe he’s a Pro Tools user, or works for Avid. 😆

            • curtisb
            • 1 year ago

            Krogoth is the Ken M of the Tech Report.

            • jensend
            • 1 year ago

            If practically nobody wants to use over 200 virtual instruments, then the DAWBench VI test isn’t even “of academic interest,” it’s just a red herring benchmark, like using glxgears to compare video cards. Just as getting a bazillion FPS in glxgears doesn’t mean you’ll get better performance in a real game, getting 3000 voice polyphony in DAWBench VI doesn’t mean better performance when using a sensible number of instruments in real production.

            It’s easier for me to imagine the DAWBench DSP test actually correlating with real-world use, but I still wonder how representative that is as well.

            • TobyAM
            • 1 year ago

            I was thinking the same thing. The description of the test is so vague that it leaves a huge swath of variability in how one would administer it. I can also imagine it not being entirely accurate, given the possibility of overlooking voice stealing and the max polyphony of whatever instrument you're testing with.

            The DSP test's compute numbers are certainly more interesting for DAWs, to me.

            • derFunkenstein
            • 1 year ago

            It's not JUST about using 200 VIs or 1,300 effects plugins at once. It's also about how quickly a CPU can process a project and render it to disk via an "offline" bounce.

            • jensend
            • 1 year ago

            If that’s what you want to compare, then that’s what you should measure.

            ‘Do I get crackling when I play 3000 voice polyphony with short buffers,’ as a latency-focused test, is an extremely dubious proxy for testing render/bounce throughput.

            • ptsant
            • 1 year ago

            I don't doubt what you say is true. I don't see why music production would not make use of CPU horsepower. However, I am incapable of judging whether this specific benchmark is realistic and accurately represents the performance in real-life music-related tasks. I hope it does. To me, it's irrelevant because (in that order) (a) it's an outlier and (b) I don't do music on my PC.

            Whatever the case, it seems horribly non-optimized for AMD processors. I am wondering whether the developers have a specific explanation for this. Is it a specific instruction set, like AVX256? Is it a very specific memory access mode? A specific way of hitting the cache? Is it the use of Intel-specific libraries? Someone could ask them, I suppose.

            • Klimax
            • 1 year ago

            Most likely it is memory latency sensitive.

            • jihadjoe
            • 1 year ago

            Dawbench simply measures how many simultaneous instances of an actual audio workload (voices of polyphony) a CPU can run before crackling (or basically high DPC latency) is detected.

            IMO it's way more useful than a raw DPC latency test, because it's meaningless to measure DPC latency unless it's taken while actual audio workloads are running.

            • Srsly_Bro
            • 1 year ago

            The intended use is Intel CPUs. That’s my problem. Your problem is not being kind on a day like today.

          • rutra80
          • 1 year ago

          Agreed. Thanks for the DAW test, but it would be even nicer to have it real-world related. It is quite easy to kill any system with a couple of plugins (a super-precise spectrogram with huge overlapping FFT window sizes comes to mind), and that is much more of a real-world situation than thousands of voices of polyphony.
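The spectrogram-plugin point above can be made concrete with a back-of-the-envelope sketch in Python. All numbers here are hypothetical illustrations (not measurements from any real plugin), and the 5·N·log2(N) figure is the usual rough op count for a radix-2 FFT of length N:

```python
import math

def fft_ops_per_second(sample_rate: int, window: int, overlap: float) -> float:
    """Rough compute cost of a sliding-window spectrogram: FFTs per second
    of audio, times the ~5*N*log2(N) floating-point ops a radix-2 FFT of
    length N needs."""
    hop = int(window * (1 - overlap))  # samples between successive windows
    ffts_per_sec = sample_rate / hop
    return ffts_per_sec * 5 * window * math.log2(window)

# Hypothetical settings: 96 kHz audio, 32768-sample windows, 87.5% overlap.
# Cost explodes as overlap approaches 100% (the hop shrinks toward zero),
# which is why a "super-precise" analyzer can bury a CPU.
print(f"{fft_ops_per_second(96000, 32768, 0.875):.3e} ops/s")
```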

        • Srsly_Bro
        • 1 year ago

        I’d like them to make software that isn’t effectively vendor locked, too. Making music and modern programs at the same time.

      • chuckula
      • 1 year ago

      Well they needed something to balance out Cinebench.

        • Srsly_Bro
        • 1 year ago

        Thanks for the kind response, bro!

      • thx1138r
      • 1 year ago

      shhhh! AMD's DAWbench numbers will improve considerably once Zen 2 hits mainstream with its wider AVX2 implementation. It's a niche benchmark, but I say keep it in there.

    • Cooe
    • 1 year ago

    “The tradeoff is that Threadripper WX chips can fall far behind in other tasks. That inconsistency leads to Threadripper WX chips’ lower-than-might-be-expected standings in our overall value chart, even as they excel in a few particular workloads.”

    Much of this is because you test in Windows and not Linux, Jeff.

    Windows has TERRIBLE NUMA support. In Linux (which is very likely to be the operating system of choice for people actually buying these CPUs), the 2990WX matches or beats the i9-7980XE in most scenarios.

    [url<]https://www.phoronix.com/scan.php?page=article&item=2990wx-linux-windows&num=1[/url<]

      • chuckula
      • 1 year ago

      1. Linux is most certainly not the choice of most people buying the 2990WX. It's just not true.

      2. I'm all about running some real AVX-512 workloads on Linux to see what Embree with Blender can do under Clear Linux. I'm willing to bet that after that you'll stop complaining about how Windows ruins the Cinebench score.

      3. I'd love to see an intelligent response from the usual AMD defense squad telling us all how wonderful a massively unbalanced NUMA configuration is while pretending that Skylake-X sucks, when it flat-out wins numerous benchmarks against 32-core Threadrippers [b<]without[/b<] even having its major features turned on. But then again, there must be a reason why Cascade Lake is already racking up wins against Epyc 2 despite AMD practically giving them away for free to HPC users.

        • Goty
        • 1 year ago

        [quote<]But then again, there must be a reason why Cascade Lake is already racking up wins against Epyc 2 despite AMD practically giving them away for free to HPC users.[/quote<] How do you think Intel won contracts in the face of superior competition in the past? Given that AMD has evidently managed to raise base clocks while doubling the core count to go along with any architectural gains, I doubt Intel is going to retain any performance crowns outside of lightly-threaded tasks.

      • Chrispy_
      • 1 year ago

      Can concur.

      We tested a 2990WX for raytracing (windows client) and it didn’t scale well.
      We also have Epyc 7401P servers and they run unix-based ESX like a champ, better than 2P Xeons.

      • Klimax
      • 1 year ago

      More like, SW doesn't care, for whatever reason, to use the NUMA API under Windows, while under UNIX it does. (Maybe for a long time it wasn't needed…)

    • derFunkenstein
    • 1 year ago

    Maybe they should have called it the Core i11-9980XE. It’s certainly cranked up to 11.

    This CPU is only about 15% more expensive than the 2990WX, but for the performance it's almost just a flat-out great value. There's nothing Threadripper wins as decisively as the 9980XE wins DAWBench. It's comedy how poorly AMD shows, and it makes Avid seem almost sane for only certifying Intel CPUs. More than 5X the plugin capacity at 96 kHz with 128 samples of latency. That's about 1.333 milliseconds of latency, if my math doesn't suck, basically taking the computer out of the equation entirely at an insanely high sample rate.
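The latency arithmetic in the comment above does check out; a quick sketch (128 samples and 96 kHz are the figures quoted above):

```python
def buffer_latency_ms(samples: int, sample_rate_hz: int) -> float:
    """Time one audio buffer represents, in milliseconds: a buffer of N
    samples at rate R covers N/R seconds of audio."""
    return samples / sample_rate_hz * 1000.0

# 128 samples at 96 kHz -> ~1.333 ms per buffer
print(buffer_latency_ms(128, 96000))
```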

    That’s intended to be high praise, not a backhanded compliment. These HEDT platforms usually completely disregard value, but in some cases it’ll triple up on a Core i7-9900K and more-than-double the 9900K on occasion, even with its clock speed disadvantage.

    edit: I saw the Ryzen 5 2600 make an appearance. Hope to see a full review on that soon. It’s a consideration for a PC I’m speccing out for my nephew.

    • ermo
    • 1 year ago

    Good on intel for *almost* managing to muzzle “the segmentation goblins”. Clock rates and performance look solid. Prices look a wee bit high, but not unreasonably so if your field benefits from the particulars of intel’s architecture.

    I maintain that intel would be better off selling all its enthusiast chips on an OC-able platform with unofficial ECC support like AMD does. The people who need verified support will go for the real Xeons anyway.

    intel could bin the very best chips (in terms of perf/watt) under the Xeon brand and then allow people to buy the chips that don’t quite meet the cut as K/X chips at slightly reduced (compared to Xeon) prices on enthusiast platforms with an unlocked multiplier and *still* profit from the arrangement.

    • chuckula
    • 1 year ago

    Intel making these chips, sending them for reviews weeks in advance, and lifting the NDA today is just a last minute panicked stunt to distract us from the Awesome Powar of half operational Epyc chips!! [url<]https://www.anandtech.com/show/13594/amd-launches-highfrequency-epyc-7371-processor[/url<]

      • MOSFET
      • 1 year ago

      Let’s just go back to facts and the sharing of information in the comments, please.

      • thx1138r
      • 1 year ago

      I hadn't seen that chip, thanks for the link. Sure, it's got half the cores, but it's still got the full cache and memory bandwidth, and about a 1 GHz uplift in speed. Nice to see AMD rounding out their EPYC line and increasing competition.

        • MOSFET
        • 1 year ago

        Here’s a better link than AnandTech (imo), especially once they put up actual performance numbers soon:

        [url=https://www.servethehome.com/new-amd-epyc-7371-frequency-optimized-processor-launched/<]STH EPYC 7371[/url<]

          • Shobai
          • 1 year ago

          “Self-Taught Haters” ? Who’re they?!

    • chuckula
    • 1 year ago

    18 cores of fail
    Even with three more dice glued
    Still never Epyc

      • Klimax
      • 1 year ago

      One downvote more to match post…

        • abiprithtr
        • 1 year ago

        It had slid to -21, so I upvoted to make it -20.
        Need 2 more upvotes, guys !

          • Mr Bill
          • 1 year ago

          +3 Done.

    • Ninjitsu
    • 1 year ago

    Quick suggestion: if the OC and non-OC bars could be interchanged in the future, that would be great – my eyes were going to the light blue bar for quite a while before I realised those were not stock results. It just catches the eye more.

    • dragontamer5788
    • 1 year ago

    Aside from AVX-512, something to note about Intel CPUs is that they support extremely fast PDEP and PEXT assembly instructions, executing in just one clock tick. AMD Ryzen supports PDEP and PEXT too, but they take 18 cycles to execute. Basically, they're only there for compatibility.

    For most people, this means nothing. But for some high-performance programmers and some weird cases (ex: Stockfish uses PEXT to calculate where bishops can move), this may be useful.

    256-bit AVX and AVX-512 support are the big items for the i9-9980XE and other -X CPUs from Intel.
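For readers unfamiliar with the instruction, here is a minimal Python emulation of what PEXT computes. The real thing is a single BMI2 instruction (exposed in C via the `_pext_u64` intrinsic); this sketch is just for illustration of the semantics:

```python
def pext(value: int, mask: int) -> int:
    """Software emulation of the BMI2 PEXT instruction: gather the bits
    of `value` selected by `mask` into the low-order bits of the result,
    preserving their order."""
    result = 0
    out_pos = 0
    while mask:
        lowest = mask & -mask      # isolate the lowest set bit of the mask
        if value & lowest:
            result |= 1 << out_pos
        out_pos += 1
        mask &= mask - 1           # clear that mask bit and continue
    return result

# Bits 4-7 of the value, packed into bits 0-3 of the result:
print(bin(pext(0b10110010, 0b11110000)))  # -> 0b1011
```

In chess engines this replaces the "magic bitboard" multiply-and-shift trick: the mask selects the squares a sliding piece could be blocked on, and PEXT turns the occupancy of those squares into a dense table index in one instruction.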

      • Star Brood
      • 1 year ago

      The guy at Best Buy will definitely try to sell granny on the PEXT.

        • Beahmont
        • 1 year ago

        The guy at Best Buy will definitely try and sell granny a HEDT too!

    • Chrispy_
    • 1 year ago

    TR 2950X still utterly dominating the scatter plot, unless you absolutely [i<]must[/i<] have that extra 20% performance for an extra 150% cost.

      • ermo
      • 1 year ago

      It’ll be interesting to see what a non-gimped-AVX2 7nm TR chip will be able to do against the competition once it arrives.

      • Kretschmer
      • 1 year ago

      For many use cases, $1,500 is insignificant compared to people or software, and time is valuable. I’m sure Intel will sell as many of these as they can fab.

        • Krogoth
        • 1 year ago

        The people who would be interested aren’t getting a Core i9 though. They would be opting for the Xeon W version which easily commands an additional $500-1000 premium.

          • chuckula
          • 1 year ago

          I’m sure Intel is crying about this situation.
          All the way to the bank.

            • Krogoth
            • 1 year ago

            The lack of UDIMM ECC support on the regular X299 platform is going to continue to be a sore spot that AMD can exploit for potential HEDT customers.

            Intel already rectified the artificial binning of their PCIe controller with the Skylake-X refresh. They are probably going to make UDIMM ECC support official on X299's successor.

            • chuckula
            • 1 year ago

            ECC memory on Threadripper isn’t some perfect solution that’s taking over the market.

            Hell… half the time the machine won’t even boot: [url<]https://www.phoronix.com/scan.php?page=news_item&px=Threadripper-2-ECC-DDR-Fail[/url<] Not too much of an advantage, although theoretically the memory has an even lower probability of experiencing errors since it never stores anything.

            • Krogoth
            • 1 year ago

            For professionals and prosumers, ECC support can make or break a deal.

            Intel is already finding out the hard way that their nonsensical market segmentation is catching up to them. They used to offer UDIMM ECC support on their high-end desktop platforms before the Nehalem era. When they moved the memory controller onto the CPU with Nehalem, they used UDIMM ECC support as a way to justify their SP Xeon SKUs.

            • ptsant
            • 1 year ago

            So your link is about rebranded memory from a manufacturer I've never heard of (Nemix), bought without first checking the motherboard's [extensive] compatibility list.

            I know firsthand the problems early Ryzen had with memory, but the example you bring here is ridiculous. Try harder.

            • Bauxite
            • 1 year ago

            Don't confuse one guy's amateur experience with reality. Phoronix doesn't even know how to buy server RAM, apparently. Nemix, WTF? If they had at least tried Micron or Kingston, I wouldn't be facepalming.

            Anyone with a clue knows you want B-die for Zen cores, and you can pick up [b<]Samsung[/b<] unbuffered ECC modules all over the place, even Newegg. They are "only" rated at 2400 (because JEDEC is stuck in 2012, along with Intel's officially allowed Xeon speeds) but will run faster than whateverthehellvendor "Nemix" is. 2933 fully populated on X399, 8x16.

            • abiprithtr
            • 1 year ago

            NEMIX, the RAM maker (or brand, or the shipper), seems to be the issue. The evidence:

            1.) Michael says on his own forum ([url<]https://www.phoronix.com/forums/forum/hardware/processors-memory/1058206-the-amd-threadripper-ecc-ddr4-2666-testing-that-wasn-t[/url<]) that he is trying to RMA the RAM and the product is "[i<]Unfortunately discontinued everywhere I looked. And this RMA'ing with Nemix doesn't look like it is going to pan out as they keep questioning me when I am trying to RMA it....[/i<]"

            2.) Phoronix also says, in the link you have given, that the RAM sticks are re-labelled ones (with overhanging stickers over the old labels, no less!).

            3a.) Looking at the relevant Newegg page linked by Phoronix ([url<]https://www.newegg.com/global/in-en/NEMIX-RAM/about[/url<]), many users have reported RMAs going through other companies. And there's a surprisingly high number of RMAs (or other kinds of returns).

            3b.) And many of the positive reviews about this RAM are one- or two-liners, as if they were written by the employees/shippers of this RAM themselves.

            4.) The RAM works fine but has some "performance anomalies," particularly with Ubuntu, as per Level1Techs: [url<]https://www.youtube.com/watch?v=5u6DY8On1XA[/url<], around 14:30 in. This may or may not be related to the issue Phoronix is running into. But that must be OK, cos as you say in another post of yours: "[i<]Linux is most certainly not the choice of most people buying the 2990X. It's just not true.[/i<]"

            Let me get back to my work now.

            • Waco
            • 1 year ago

            ECC is a deal-breaker for many, and you’re reaaaally stretching on that BS link.

        • Goty
        • 1 year ago

        [quote<]I'm sure Intel will sell as many of these as they can fab.[/quote<] All three of 'em!

        • K-L-Waster
        • 1 year ago

        Hmm, when you consider that you’re within spitting distance of affording *two* 2950X systems once you factor in the CPU cost difference and the motherboard cost premiums, the math gets a bit more complex.

        E.g. “Do I get one honkin’ system for animating and rendering, or do I get a TR 2950X system for rendering *plus* a 2920X system for my animator so they can work on the next scene while the renderer is crunching away?”

          • Kretschmer
          • 1 year ago

          Not really. If your people and software are expensive enough, the $1-2K won’t matter much.

            • Krogoth
            • 1 year ago

            Not to penny-pinching bean counters/PHB types.

          • Beahmont
          • 1 year ago

          Two systems, double the power draw, double the software costs, double the space, still slower on individual tasks.

          Maybe in a sheer-volume case where you can keep both systems fed at max capacity it would make sense, but the extra software licenses can easily cost more than the 9980XE system does in total. It would be a very interesting use case where the math works out in favor of two systems over one.

            • Klimax
            • 1 year ago

            Also extra synchronization over network.

            • K-L-Waster
            • 1 year ago

            [quote<]It would be a very interesting use case where the math works out for 2 systems over 1.[/quote<] As I suggested, 3D animating. Any time the system is rendering, the animator is twiddling his/her thumbs waiting for it to be available... unless you give the animator their own box. (Which is what most 3D shops do.)

        • Laykun
        • 1 year ago

        This is a common misconception. Companies in production don't generally buy the absolute best equipment just because the price pales in comparison to human or software license costs. If it's shown that spending the extra money can SAVE time (and thus money) beyond the premium paid, because there's a bottleneck that needs to be addressed, then yes, the cost is justified; but prudent bean counters have to have rationales for purchases before they can happen.

          • Kretschmer
          • 1 year ago

          I was giving a short answer in a comments discussion. Obviously most shops won’t spring for overspecced PCs because “why not they’re cheap LOL.*” But there are many industries and applications where time is money, and saving a billable hour or two a day or pushing out a few more units pays for that $1-2K within a week or less. I’ve been on projects where the billable rate was so high that it would have made sense to assign me a dedicated IT person to prevent downtime, because saving several billable hours a week would have covered their entire compensation. That’s not a “need for speed” application, but the same concept applies.

          *One exception is government departments with “use it or lose it” budgets who often end up splurging at the end of the year just to spend.

        • Chrispy_
        • 1 year ago

        Maybe, but when you’re buying 50 of these at once, that’s a big difference in cost that needs to be approved and when one of the beancounters mentions that you can hire two more staff for that price, you suddenly find yourself doing two more inductions for new staff.

        Software licensing costs seem to be getting much lower these days, too. In enterprises where the really expensive software is mandatory, the eye-watering list prices are [i<]always[/i<] heavily discounted, and then everything is either cloud or network licensed from a shared pool, so you may be sharing an individual license among 2-200 people.

        • ptsant
        • 1 year ago

        Well, these people can also buy a $30K workstation with ECC and 2x Xeons if performance is that critical.

        I haven’t seen situations where people didn’t care about price, but I of course agree that it will sell well.

    • homerdog
    • 1 year ago

    In some of the tests on page 3 OC triples performance lol.

      • BillyBuerger
      • 1 year ago

      Yeah, the AIDA64 FP tests have the stock 9980XE way down the list. Nowhere near the 7980XE stock. OC then puts it just above the 7980XE. Is the graph wrong? Is there something wonky that would put the already higher clocked 9980XE so far behind the 7980XE until OC’d?

        • Jeff Kampman
        • 1 year ago

        This is a result of an AVX-512 offset that was mistakenly left in place after our overclocked testing. I’m working on updating the numbers now.

          • Jeff Kampman
          • 1 year ago

          I’ve uploaded new graphs with the correct performance figures. You may need to super-refresh (Ctrl+F5) or clear your browser cache to see them.

    • ermo
    • 1 year ago

    +1 for the moniker “[url=http://harrypotter.wikia.com/wiki/Gringotts_Head_Goblin<]the segmentation goblins[/url<]"

    • Krogoth
    • 1 year ago

    Any word on whether these chips contain hardware-level mitigations for Meltdown/Spectre like the Coffee Lake refresh, a.k.a. the Core 9xxx family?

    I’m also more curious to see power consumption and thermal characteristics when you get time to measure it.

      • Jeff Kampman
      • 1 year ago

      I have the power numbers already; I'm just graphing them as we speak. Impressively enough, the 9980XE only consumes about 12 more watts under Blender than the i9-7980XE at stock clocks, for the performance improvements it delivers.

        • ermo
        • 1 year ago

        What about the mitigation situation? Are you allowed to comment on that?

        • Mr Bill
        • 1 year ago

        I’m wondering if you saw this delta between base frequency TDP and overclocked TDP? (e.g. [url=https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo<]why-intel-processors-draw-more-power-than-expected-tdp-turbo[/url<]

        • YellaChicken
        • 1 year ago

        [quote<]Impressively enough[/quote<] Is it? I hope you realise who you're talking to Jeff 😉

    • NTMBK
    • 1 year ago

    Damn, that’s some serious performance.

      • Ultracer
      • 1 year ago

      Can’t wait for the i9-9XXX review.
