AMD’s Ryzen Threadripper 1920X and Ryzen Threadripper 1950X CPUs reviewed

AMD’s Zen architecture has proven impressively scalable. From four cores and four threads to eight cores and sixteen threads and everything in between, the basic eight-core Zen die (often referred to as a Zeppelin) has made a name for itself in PCs ranging from budget builds to the high end. Most impressively, Ryzen 7 CPUs matched Intel’s Haswell and Broadwell high-end desktop parts for productivity performance.

Although the power of Ryzen 7 chips proved impressive, they lacked some of the features that the most demanding users have come to expect from Intel’s platforms. Quad-channel memory and gobs of PCIe lanes have, until recently, remained the calling card of Intel’s X99 and X299 platforms. The Ryzen Threadripper CPU family changes all of that.

| CPU | Cores | Threads | Base clock | Four-core boost clock | XFR boost range | L2 cache | L3 cache | PCIe 3.0 lanes from CPU | Memory channels | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen Threadripper 1950X | 16 | 32 | 3.4 GHz | 4.0 GHz | 200 MHz | 8MB | 32MB | 64 | 4 | $999 |
| Ryzen Threadripper 1920X | 12 | 24 | 3.5 GHz | 4.0 GHz | 200 MHz | 6MB | 32MB | 64 | 4 | $799 |
| Ryzen Threadripper 1900X | 8 | 16 | 3.8 GHz | ??? | ??? | ??? | ??? | 64 | 4 | $549 |

Thanks to its Infinity Fabric on-die and inter-die interconnect, AMD can join multiple Zeppelins on one package to scale past eight cores and two channels of memory. We’ve already seen that approach used to great effect in the Epyc family of server processors, and it’s the technique AMD is using to make the Ryzen Threadripper 1920X and Ryzen Threadripper 1950X. If you’d like to read more about the Zen architecture’s fine details, David Kanter’s run-down of Zen is a fine place to start. We’ll just be covering the broad strokes of how Zeppelins become Threadrippers.

The massive, Epyc-esque Threadripper package sews two Zeppelins together in a diagonal Infinity Fabric topology. AMD then taps two channels of DDR4 memory and all 32 lanes of PCIe 3.0 connectivity from each die to create a formidable multi-chip module. Of the 64 PCIe 3.0 lanes this stitching-together makes available, 60 will be available for motherboard makers to distribute across the various slots and ports of their products. The remaining four are reserved for the connection between CPU and chipset. The other two “dies” under Threadripper’s massive heatspreader are dummies that serve to maintain the integrity of the package. Overclockers will appreciate that AMD continues to solder the heatspreader to Threadripper CPUs, a practice that Intel recently abandoned with its Skylake-X chips.

Because each eight-core Zeppelin brings two channels of memory to the table that are connected across the Infinity Fabric, Threadripper’s memory-access characteristics are inherently non-uniform. Non-uniform memory access is a familiar concept in the server world, but regular folk haven’t had to worry about NUMA on the desktop at least since AMD’s own Quad FX platform put up a fight against the Core 2 Extreme QX6700 a decade and change ago.

Basically, when a naive application performs a memory operation on a NUMA system, it may find that some accesses are serviced more quickly than others. Some applications might not care about this difference, but other, more latency-sensitive software could experience reduced performance. Operating system support for NUMA should generally serve to cushion these bumps, but it doesn’t change the basic fact that not every application will tolerate inconsistent memory access characteristics well.

As a result, AMD offers Threadripper owners two choices for memory-access modes in its Ryzen Master utility. By default, the CPU will present itself as a uniform memory access node, which AMD calls “Distributed Mode.” Despite what its topology might suggest, this mode does not offer uniform memory access latencies. Instead, Distributed Mode accepts the latency differences inherent to the Threadripper multi-chip module as a tradeoff for delivering maximum bandwidth from all four memory channels. AMD says applications with unknown or unpredictable threading behavior will, on average, still experience a performance benefit from distributing memory accesses over all of the available channels.

In its testing, AMD also found that the performance of some applications, games especially, can benefit from running the Threadripper MCM as two separate NUMA nodes, each with what is essentially dual-channel memory. In this “Local Mode,” the operating system will have free rein to keep applications running on cores closest to the memory where their data resides. Local Mode will let the operating system work as hard as possible to keep an application’s workload and memory within one node before assigning work and data to the other node. For the most part, then, Local Mode should present an application with the lowest memory latencies possible at the cost of potentially lower bandwidth.

AMD further discovered that presenting some mainstream applications with 32 threads, including some DiRT and Far Cry titles, caused application-breaking problems. For those applications, the company is offering a third switch in its Ryzen Master utility called “Legacy Compatibility Mode.” The goal of this mode is to both reduce the number of active threads on the system and to lower memory access latencies. AMD achieves this by leaving simultaneous multi-threading on, turning off the active cores and threads on one Zeppelin, and putting the system in NUMA mode. In this mode, the Threadripper 1950X will behave like a single Ryzen 7 1800X, while the Threadripper 1920X will behave like a Ryzen 5 1600X. Even though all of the active cores and threads on one Zeppelin are disabled in this mode, the sleeping die will still leave its memory controller powered up in case the system needs to access the memory pool connected to the remote node.

If this is all too much to keep track of, AMD’s Ryzen Master utility offers two pre-built profiles that are pretty foolproof. “Creator Mode” sets legacy compatibility mode to off, SMT to on, and the memory mode to Distributed. “Game Mode” turns on Legacy Compatibility Mode, and that’s it. After a given profile is applied in Ryzen Master, the system will restart with the chosen mode applied.
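These switches can be summarized in a small sketch. The mode names and behaviors come from AMD’s descriptions above; the data layout and the `effective_topology` helper are purely our own illustration:

```python
# Sketch of Ryzen Master's pre-built profiles and their effect on the
# topology a Threadripper system presents to the OS. Mode names are AMD's;
# the data structures and helper below are illustrative, not AMD's API.

PROFILES = {
    # Creator Mode: legacy compatibility off, SMT on, Distributed (UMA) memory
    "Creator": {"legacy_mode": False, "smt": True, "memory_mode": "Distributed"},
    # Game Mode: Legacy Compatibility Mode on (SMT stays on, memory goes NUMA/Local)
    "Game": {"legacy_mode": True, "smt": True, "memory_mode": "Local"},
}

CHIPS = {
    "1950X": {"cores": 16, "threads": 32},
    "1920X": {"cores": 12, "threads": 24},
}

def effective_topology(chip, profile):
    """Return the (cores, threads) the OS would see under a given profile."""
    cores, threads = CHIPS[chip]["cores"], CHIPS[chip]["threads"]
    if PROFILES[profile]["legacy_mode"]:
        # Legacy Compatibility Mode disables one of the two Zeppelin dies,
        # halving the active cores and threads.
        cores, threads = cores // 2, threads // 2
    return cores, threads

print(effective_topology("1950X", "Game"))     # a 1950X behaves like an 8C/16T 1800X
print(effective_topology("1920X", "Creator"))  # the full 12C/24T
```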

The inclusion of a Game Mode in Ryzen Master’s settings might lead some to expect that they’ll need to flip between these settings as they switch between work and play. That’s probably not the case. AMD says that the average performance improvement it observed across all 60 of the games and all three of the resolutions it tested with Game Mode (read: Legacy Compatibility Mode) was 4%. Some games benefited more, while others actually experienced performance reductions. The company says the following applications are especially helped by Game Mode:

  • Civilization VI
  • Call of Duty: Modern Warfare Remastered
  • Heroes of the Storm
  • Gears of War: Ultimate Edition
  • DOTA 2
  • Watch Dogs
  • Thief
  • Hitman: Absolution
  • Fallout 4

AMD says it made Game Mode as an option for Threadripper customers who simply can’t tolerate anything less than the best performance possible from their systems. Honestly, I don’t think most people are going to want to deal with profile fiddling and restarts in exchange for a few more FPS here and there. Check whether any games you play are on the list above, but we felt the most appropriate way to test Threadripper was in the mode AMD ships it in by default.

 

Threadripper in the flesh

Any discussion of Threadripper cannot overlook the fact that these are enormous chips, quite unlike anything that’s graced a desktop system in recent memory. Each Threadripper CPU ships in a plastic protective shell that’s part of an elaborate and (in theory) durable box.

While my business-major instincts are hitting on words like “economies of scale with Epyc” when thinking about the sheer size of the Threadripper multi-chip module itself, it’s also ingenious marketing. Most people have an inescapable lizard-brain instinct that bigger things are better, and even compared to Intel’s LGA 2066 CPUs, the massive Threadripper chips evoke an undeniable reptilian satisfaction when you handle them.

Perhaps because of the risks of dropping a ZIF chip this large onto a socket the wrong way, Threadripper CPUs rest in a semi-permanent plastic mounting frame. The assembly pops, locks, and drops into an enormous new socket called TR4. Unlike the pin grid array of Socket AM4, TR4 is a 4094-pin land grid array that’s like a much larger version of Intel’s modern sockets. TR4 is outwardly similar, if not identical, to AMD’s Socket SP3, which plays host to Epyc server chips. Despite that similarity, the two chip families can’t be swapped between sockets. You’ll still see artifacts of this shared lineage in the “SP3 SAM” stamp on the TR4 retention bracket and pin protector, though.

Installing a CPU in Socket TR4 will be a new experience for anybody not accustomed to data-center hardware. AMD includes a torque wrench (that’s also a Torx driver) in its Threadripper packaging. Use of this wrench should be considered mandatory for installing and uninstalling Threadripper CPUs. Should you lose yours, the torque spec is 1.5 newton-meters.

The first step is to loosen each Torx screw using the order printed on the retention bracket itself (3-2-1 to loosen, 1-2-3 to tighten). Once the bracket is open, it reveals a spring-loaded receiver frame that can be released by pulling gently on the two blue tabs toward the top of the socket.

Once the frame is vertical, builders will need to gently slide out a clear plastic external cap before sliding the Threadripper assembly into the guide rails on the frame. The carrier will slide into the receiver until it gently clicks into place at the bottom of its travel.

Once you feel the click, make sure to take the gray plastic pin protector off the socket before swinging the receiver frame and CPU down onto the socket. The receiver frame will also click once it’s locked back into place. Finally, lower the retainer bracket back onto the CPU and work around the three Torx screws with the torque wrench. I found that doing a half-turn on each screw in sequence was the best approach. The torque wrench will click once you’ve tightened each screw adequately.

Overall, this sequence sounds more intimidating than it is. Just watch an instructional video a couple of times before proceeding, and whatever you do, don’t plop the CPU directly onto the pins or remove it from the plastic carrier frame. I’d also keep the external cap and pin protectors stored away in your motherboard box, since you won’t ever want to leave a socket this large exposed for any length of time. Repositioning bent pins on this baby will likely not be possible without damaging others in the socket.

Applying thermal paste to a heat spreader this large also requires more planning than the usual “grain-of-rice-and-squash-it” method. The fine folks at GamersNexus have detailed several different methods of paste application and their effects on performance. The short version is that AMD recommends applying five dots of compound on the heat spreader, but I found it easiest to apply a generous blob of compound (about three times as much as one might normally apply for LGA 115x CPUs) smack in the middle of the “Z” of the Ryzen logo on the heat spreader. Regardless of the method you use, you will need much more thermal compound than with smaller CPUs to achieve full coverage of your liquid cooler’s cold plate.

Once the CPU is installed and pasted, the next step is getting the right cooler on top. Although air coolers for Threadripper are in the pipe, we imagine most will want to use liquid coolers for the most socket clearance and best performance. AMD includes a Socket TR4 mounting bracket in the box for Asetek-based liquid coolers from a wide range of manufacturers, including Corsair, NZXT, Thermaltake, and Arctic Cooling. This bracket installs easily on unadorned Asetek coolers like the Thermaltake Water 3.0 Ultimate AMD sent us for testing. Other, fancier coolers, like the Corsair H115i that I often use in testing, have preinstalled brackets that will pop off the pump head with a firm counterclockwise twist.

The Threadripper cooler bracket has asymmetric lugs that are narrower at the top of the socket, so builders will want to make sure the top of the pump head and the top of the bracket are in agreement. Once you have the bracket on your liquid cooler of choice, follow the tightening order near each screw on the bracket until they’re all snug, and you’re set.

 

Touring the X399 platform with Gigabyte’s X399 Aorus Gaming 7 motherboard

Ryzen Threadripper CPUs may be impressive in their own right, but a CPU is nothing without a motherboard to go with it. AMD’s vehicle for its high-end platform is the X399 chipset, which bristles with USB and PCIe lanes of its own to go with the 60 available from every Ryzen Threadripper SoC.

A great deal of connectivity comes from the Threadripper package beyond PCIe lanes. Eight USB 3.0 ports are tied to the CPU itself. The X399 chipset provides eight lanes of PCIe 2.0, eight SATA ports, two USB 3.1 Gen2 ports, another USB 3.0 port for rear-panel connectors, four internal USB 3.0 headers, and six USB 2.0 headers.

We performed our Threadripper testing using the Gigabyte X399 Aorus Gaming 7. This beastly board taps most every resource X399 has to offer, and its black-and-gray color scheme serves as a neutral canvas for today’s RGB LED-bedecked builds. More conservative builders can turn the onboard LEDs off, but honestly, on high-end systems like this, the extensive illumination the Gaming 7 offers will help communicate that you don’t have just any old motherboard in your mid-tower.

The enormous TR4 socket gets flanked with eight DIMM slots on the Gaming 7. These slots are all RGB LED-illuminated, and they use my favored one-clip design for easy insertion and removal of DIMMs. The board offers memory multipliers for DDR4 DIMMs ranging past 3600 MT/s, but I expect most will be happier to hear that ECC memory is supported by this board.

Its back panel offers a whopping eight USB 3.0 ports powered by the Ryzen SoC itself, plus USB 3.1 Gen2 Type-A and Type-C ports powered by the X399 chipset. Audio output comes courtesy of a Realtek ALC1220 codec, and an Intel wireless adapter offers 802.11ac and Bluetooth connectivity right from the board. Killer’s E2500 Gigabit Ethernet adapter handles wired networking duties.

The Gaming 7 distributes a Threadripper CPU’s 60 PCIe 3.0 lanes across four PCIe slots and three M.2 connectors. The first and fourth slots from the left in the picture above offer a full 16 lanes to the CPU, while the second and fifth tap eight of those lanes. The third slot provides four lanes of PCIe 2.0 from the X399 chip.

Two M.2 22110 slots with four lanes of PCIe 3.0 hooked up nestle between these physical X16 slots. Both connectors are shrouded with heatsinks backed by pre-applied thermal tape. I was initially concerned that one would only be able to install rare M.2 22110 devices in these slots, but Gigabyte helpfully includes a bag of M.2 standoffs in the box that can be added to the board for use with shorter drives. Once an M.2 2280 drive is secured to a standoff, one can simply peel off the protective plastic on the heatsink’s thermal pad and screw the heatsink back into the M.2 22110 standoff. Handy.

A third M.2 2280 slot with its own dedicated heatsink sits beneath the chipset heatsink. One doesn’t need a separate standoff for use with this slot unless the plan is to install a shorter drive than the typical 80-mm gumstick. I’d use this slot as the default location for an M.2 2280 drive if I were building with the Gaming 7, since it’s located a ways from any hardware that might cause heat-soaking issues with the heatsink above.

The nice thing about all of these PCIe and M.2 slots is that not a one shares its lanes with any other device on the motherboard. What you see is what you get, and that should be the case with every Ryzen Threadripper CPU. Even better, only data from the eight SATA connectors and some assorted peripherals should have to traverse the four PCIe 3.0 lanes from the chipset to the CPU. All of the M.2 devices and PCIe slots could, in theory, operate at full bandwidth without risk of a bottleneck.
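As a sanity check, the lane assignments described above do account for all 60 of a Threadripper CPU’s usable lanes. The labels in this sketch are our own shorthand for the Gaming 7’s slots and connectors:

```python
# Tally the Gaming 7's CPU-attached PCIe 3.0 lane assignments described
# above. (The third physical slot runs off the X399 chipset's PCIe 2.0
# lanes, so it isn't counted against the CPU's budget.)
cpu_lanes = {
    "PCIe slot 1 (x16)": 16,
    "PCIe slot 2 (x8)": 8,
    "PCIe slot 4 (x16)": 16,
    "PCIe slot 5 (x8)": 8,
    "M.2 connector 1 (x4)": 4,
    "M.2 connector 2 (x4)": 4,
    "M.2 connector 3 (x4)": 4,
}
total = sum(cpu_lanes.values())
print(total)  # 60: every lane a Threadripper CPU exposes to the motherboard
```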

I cannot overstate how big of a relief this arrangement is compared to the complicated lane-sharing that can arise on today’s Intel motherboards. Intel’s X299 chipset can be tapped for up to 24 PCIe 3.0 lanes on top of the 28 or 44 lanes direct from an LGA 2066 CPU, but those lanes will all have to traverse the DMI 3.0 connection from chipset to CPU, and it’s possible that adding M.2 devices to a board will disable random SATA connectors or cause other minor headaches.

The beefy heatsink on Aorus’ beefy power-delivery circuitry

Other nice features on the Gaming 7 include eight four-pin fan headers with automatic three-pin or four-pin fan detection, nine separate temperature sensors, gold-plated ATX and EPS power connectors, and a front-panel USB 3.1 Gen2 header.

All told, the X399 Aorus Gaming 7 has practically everything one could ask for in order to take advantage of a Ryzen Threadripper CPU’s impressive resources. Aside from one minor early teething issue that the company explained how to work around from the get-go, my experience with the Gaming 7 was flawless. At $389.99, this is not a cheap board, but it lands about in the middle of the range for X399 mobos right now. I’d heartily recommend it to anybody looking for a reasonably-priced foundation for their Threadripper CPU.

Now that we’ve seen the X399 platform in its totality, let’s get to our performance testing. 

 

Our testing methods

As always, we did our best to deliver clean benchmarking numbers. We ran each benchmark at least three times and took the median of those results. Our test systems were configured as follows:

| Component | AMD test system |
| --- | --- |
| Processor | AMD Ryzen Threadripper 1950X / AMD Ryzen Threadripper 1920X |
| Motherboard | Gigabyte X399 Aorus Gaming 7 |
| Chipset | AMD X399 |
| Memory size | 32GB |
| Memory type | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | 3200 MT/s (actual) |
| Memory timings | 15-15-15-35 1T |
| System drive | Intel 750 Series 400GB NVMe SSD |

 

| Component | Intel X299 test system |
| --- | --- |
| Processor | Intel Core i9-7900X / Intel Core i7-7820X |
| Motherboard | Asus Prime X299-Deluxe |
| Chipset | Intel X299 |
| Memory size | 32GB |
| Memory type | G.Skill Trident Z DDR4-3600 (rated) SDRAM |
| Memory speed | 3200 MT/s (actual), 3600 MT/s (actual) |
| Memory timings | 15-15-15-35 2T (DDR4-3200), 16-16-16-36 2T (DDR4-3600) |
| System drive | Samsung 850 Pro 512GB |

 

| Component | Core i7-6950X system | Core i7-7740X system |
| --- | --- | --- |
| Processor | Intel Core i7-6950X | Intel Core i7-7740X |
| Motherboard | Gigabyte GA-X99-Designare EX | Gigabyte X299 Aorus Gaming 3 |
| Chipset | Intel X99 | Intel X299 |
| Memory size | 64GB | 16GB |
| Memory type | G.Skill Trident Z DDR4-3200 (rated) SDRAM | G.Skill Trident Z DDR4-3866 (rated) SDRAM |
| Memory speed | 3200 MT/s (actual) | 3200 MT/s (actual) |
| Memory timings | 16-18-18-38 2T | 15-15-15-35 2T |
| System drive | Samsung 960 EVO 500GB | Samsung 960 EVO 500GB |

They all shared the following common elements:

| Component | Shared hardware |
| --- | --- |
| Storage | 2x Corsair Neutron XT 480GB SSD, 1x HyperX 480GB SSD |
| Discrete graphics | Nvidia GeForce GTX 1080 Ti Founders Edition |
| Graphics driver version | GeForce 384.94 |
| OS | Windows 10 Pro with Creators Update |
| Power supply | Seasonic Prime Platinum 1000W |

Our thanks to AMD, Intel, Gigabyte, Corsair, and G.Skill for helping us to outfit our test rigs with some of the finest hardware available. Some additional notes on our testing methods:

  • Unless otherwise noted, we ran our gaming tests at 2560×1440 at a refresh rate of 144 Hz. V-sync was disabled in the driver control panel.
  • For our Intel test systems, we used the Balanced power plan, as we have for many years. Our AMD test bed was configured to use the Ryzen Balanced power plan that ships with AMD’s chipset drivers.

Our testing methods are generally publicly available and reproducible. If you have questions, feel free to post a comment on this article or join us in the forums.
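As a small example, the run-aggregation step is trivial to reproduce. The run times below are invented placeholders, not actual benchmark results:

```python
from statistics import median

# Each benchmark is run at least three times, and we report the median run.
# These run times are made-up placeholders for illustration only.
runs_seconds = [41.8, 40.9, 41.2]
print(median(runs_seconds))  # 41.2
```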

 

Memory subsystem performance

To get a sense of how Threadripper’s quad-channel memory architecture affects performance in the move from the AM4 platform to X399, we rely on AIDA64’s built-in memory benchmark suite.

Compared to the Ryzen 7 1800X and its two channels of memory, the Threadrippers both nearly double the 1800X’s performance in writes and copies, but fall a bit short of that increase in reads. It’s interesting to observe that bandwidth generally doesn’t scale with core count for Threadripper, as it does to some degree for Skylake-X. Still, applications that found themselves memory-bandwidth-constrained on the AM4 platform get plenty more throughput to play with on X399.

We also tested memory latency using AIDA64’s built-in benchmark. It should be noted that these results are a worst-case scenario for latency, thanks to our choice to run the chip in its default Distributed Mode (i.e., as a UMA node). AMD officially says memory access latency for a given application will be around 78 ns to the near die and around 133 ns to the far die, for an average latency of about 105.5 ns. Our use of DDR4-3200 with fairly typical 15-15-15-35 1T timings cuts a few nanoseconds off that figure, but on average, applications should expect considerably higher memory latency from Threadrippers in their default Distributed Mode than from Skylake-X and its mesh architecture.
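That average is just the midpoint of the near- and far-die figures, since Distributed Mode interleaves accesses evenly across both dies:

```python
# In Distributed (UMA) mode, accesses interleave across both dies, so the
# expected latency is simply the mean of AMD's quoted near- and far-die figures.
near_ns, far_ns = 78, 133
average_ns = (near_ns + far_ns) / 2
print(average_ns)  # 105.5, matching AMD's quoted average
```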

Some quick synthetic math tests

AIDA64 offers a useful set of built-in directed benchmarks for assessing the performance of the various subsystems of a CPU. The PhotoWorxx benchmark uses AVX2 on compatible CPUs, while the FPU Julia and Mandel tests use AVX2 with FMA.

Normally, we would let these results pass without comment, but AIDA64’s CPU Hash test gets a curious (and massive) speedup on Ryzen CPUs. That’s because the Zen architecture has what seems to be little-publicized support for Intel’s SHA Extensions. These extensions permit hardware acceleration of some of the SHA family of algorithms, and CPU Hash uses SHA-1 as its algorithm of choice. SHA-1 isn’t particularly useful in practice any longer, but SHA-256 is, and the folks at SiSoft report similar speedups for that algorithm. AVX implementations of other SHA versions might help Intel processors close the gap, though.
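The speedup is transparent to software: code calls the same hashing API, and the CPU’s SHA instructions, when present and actually used by the underlying crypto library, simply make it run faster. A minimal SHA-256 example in Python, using the standard FIPS 180 “abc” test vector:

```python
import hashlib

# SHA hardware acceleration requires no code changes; hashlib (backed by
# OpenSSL on most builds) picks the fastest available implementation.
digest = hashlib.sha256(b"abc").hexdigest()
print(digest)
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```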

The Threadripper 1950X’s 16 cores seem to allow it to go toe-to-toe with the wider-but-less-numerous AVX FMA units in the i9-7900X in the Julia and Mandel tests. The 1920X is more or less on par with the i7-6950X and i7-7820X here, as well. If there’s one spot where throwing more cores at the problem seems to have helped Threadrippers, this is it.

Now that we’ve seen how these chips stack up on a synthetic playing field, it’s time to let them out of the corral and see how they chew through real-world work.

 

Doom (Vulkan)
Doom‘s Vulkan renderer might not put the most stress on a single core, but when a game runs this fast, interesting things can still happen. We tested Doom with all of its settings maxed at 2560×1440 using the GeForce GTX 1080 Ti Founders Edition graphics card.


Vulkan is usually the great equalizer for Doom performance, so there’s not a ton of difference between the slowest and fastest CPUs in this test. Having more cores on tap is still good for a small boost in average frame rates for our many-threaded chips, though. Surprisingly, the Core i7-7820X delivers the worst 99th-percentile frame times in this test by a wide margin. In absolute terms, a 14-ms 99th-percentile frame time isn’t the worst thing in the world, but it’s a bit lackluster in this company.


Our “time-spent-beyond-X” graphs can be a bit tricky to interpret, so bear with us for just a moment before you go rocketing off to our productivity results or the conclusion. We set a number of crucial thresholds (or bins) in our data-processing tools—50 ms, 33.3 ms, 16.7 ms, 8.3 ms, and 6.94 ms—and determine how long the graphics card spent on frames that took longer than those times to render. Any time over the limit ends up aggregated in the graphs above. Those thresholds correspond to instantaneous frame rates of 20 FPS, 30 FPS, 60 FPS, 120 FPS, and 144 FPS, and “time spent beyond X” means time spent beneath those respective frame rates. We usually talk about these results as a proportion of the one-minute test runs we use to collect our data. If that’s still too much to bear, just understand that more time spent in these graphs means worse performance.

If even a handful of milliseconds make it into our 50-ms bucket, we know that the system is struggling to run a game smoothly, and it’s likely that the end user will notice severe roughness in their gameplay experience. Too much time spent on frames that take more than 33.3 ms to render means that a system running with traditional v-sync on will start running into equally ugly hitches and stutters. Ideally, we want to see a system spend as little time as possible past 16.7 ms rendering frames, and too much time spent past 8.3 ms or 6.94 ms is starting to become an important consideration for gamers with high-refresh-rate monitors and powerful graphics cards.
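A simplified sketch of that computation (our actual data-processing tools differ in the details):

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Sum the time spent past a threshold across a run's frames.

    For each frame that took longer than the threshold to render, only the
    portion of its render time beyond the threshold is counted. This is a
    simplified sketch of our tooling, not the tooling itself.
    """
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Three hypothetical frames: only the 20-ms frame breaks the 16.7-ms budget,
# contributing its roughly 3.3 ms of excess time to the bucket.
print(time_spent_beyond([10.0, 20.0, 5.0], 16.7))
```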

While all of our chips spend a little time holding up the graphics card beyond 16.7 ms, those holdups should be practically invisible. For the fast-running Doom, the more interesting results can be found past 8.3 ms and 6.94 ms. The Threadripper 1950X and the Ryzen 7 1800X lead the pack here, while the i9-7900X, the i7-6950X, and the Threadripper 1920X trade blows. The i7-7820X spends over a second under 90 FPS overall, thanks to a strange (and repeatable) pattern of stuttering that’s absent from the Core i9-7900X. These stutters mean a gamer could have a potentially less smooth experience on the i7-7820X than any other chip here.

The same story continues past 6.94 ms, although our contenders are more evenly matched here. The Threadrippers and the Core i9-7900X all spend about four seconds of our one-minute test run on tough frames that take more than 6.94 ms to render, but the i7-7820X’s curious performance means it spends another second yet on that hard work—and it’s therefore the least smooth chip here. Just goes to show that even in this largely GPU-bound test, the CPU can still matter.

 

Watch Dogs 2

With the right settings, Watch Dogs 2 can be a CPU benchmark. It can also be a graphics benchmark. It can even be both at once. For this review, we enabled the game’s temporal filtering support, dialed up the graphics quality, and set extra details to 50% to produce the most punishing CPU test we could muster.


Even with these CPU-melting settings, Watch Dogs 2 remains mostly GPU-bound at 2560×1440. In both average FPS and our 99th-percentile metric of smoothness, the Threadrippers and the Core i9s are quite closely matched. A couple more FPS on average isn’t worth worrying about that much. Strangely, the i7-7820X bookends our results. This is one title where faster memory seems to help the chip, for whatever reason.


Our time-spent-beyond-X metrics reveal a wide gulf between the 16.7-ms and 8.3-ms buckets. The best thing that can be said for our test CPUs is that none of the Core i9s or Threadrippers spend an appreciable amount of time holding up the graphics card at the critical 16.7-ms mark. This should come as a relief for AMD, since Watch Dogs 2 has historically underperformed on Ryzen chips at lower resolutions. Any of these uber-expensive CPUs are a fine companion for the GTX 1080 Ti at 2560×1440.

 

Deus Ex: Mankind Divided (DX11)
Deus Ex: Mankind Divided‘s geometrically rich environments pose a challenge for CPUs at 1920×1080, so we were curious what would happen with one of the world’s most powerful graphics cards at 2560×1440 with our usual test settings.


As we saw with Watch Dogs 2, cranking the resolution of Deus Ex: Mankind Divided produces largely GPU-bound results. The Core i7-7820X and i9-7900X demonstrate ever-so-slightly higher performance potential, but the differences just aren’t that large. Both Threadrippers turn in better 99th-percentile frame times than the i7-7820X, though.


The most meaningful results in our time-spent-beyond-X graphs for DXMD arise at the 8.3-ms mark. Perhaps because it packs fewer cores into the same TDP, the Threadripper 1920X spends about a second less under 120 FPS than its bigger brother. The Core i9-7900X spends about a second less still under 120 FPS, and the i7-7820X with DDR4-3200 continues its frustratingly inconsistent performance by coming in just behind the i7-6950X. All of these chips will provide a satisfying gaming experience with the GTX 1080 Ti at 2560×1440 in this title, but the Intel chips are just a bit better.

 

Grand Theft Auto V
Grand Theft Auto V can still put the hurt on CPUs as well as graphics cards, so we ran through our usual test run with the game’s settings turned all the way up at 2560×1440. Unlike most of the games we’ve tested so far, GTA V favors a single thread or two heavily, and there’s no way around it with Vulkan or DirectX 12. In that way, it’s a perfect test of whether a CPU can keep the graphics card fed.


GTA V doesn’t play well with Ryzens in general, and that’s still evident at 2560×1440. Despite their lower average frame rates versus the Intel competition, the Threadrippers both deliver a fine 99th-percentile frame time, so it’s not all bad. The weird stutter problem we observed in Doom returns in GTA V for the i7-7820X, though, causing its 99th-percentile frame time to trail the rest of the pack significantly. These stutters are hard to see in-game, but they are present.


Although the i7-7820X’s stuttery performance is evident at the 16.7-ms mark, the bulk of its time is aggregated beyond 8.3 ms. Even so, the i7-7820X spends about three-and-a-half seconds less under 120 FPS compared to the Threadripper 1920X, and the Threadripper 1950X trails yet further behind. The fine performance of the Ryzen 7 1800X suggests these results might be mitigated by flipping the Threadrippers into Game Mode, but that’s an extra bit of fiddling that’s simply not necessary with the single-chip solutions here.

Even among the Intel chips, the i9-7900X performs slightly worse than the i7-6950X, core-for-core. I doubt it’s a noticeable difference, but it just goes to show that progress is not always forward.

 

Hitman (DirectX 12)
Hitman‘s DX12 mode can take advantage of every core and thread we can throw at it. We used the same max settings that we usually do for graphics-card reviews, but keeping a GTX 1080 Ti fed is a different task than handling a GTX 1080 at 1920×1080. Let’s see how these CPUs manage with it.


Here’s an example of why it’s essential to consider both average FPS and 99th-percentile frame times together. The Ryzen CPUs trail the Intel chips in the measure of overall performance potential that average FPS affords, but the Threadrippers and the Ryzen 7 1800X turn in better 99th-percentile frame times than the Skylake-X chips do.


Our time-spent-beyond-16.7-ms graph would seem to favor the Threadrippers, but none of these chips would be distinguishable at this threshold in the real world. The 8.3-ms mark is more meaningful here once again, and the Core i9 CPUs spend somewhat less time holding up the graphics card at this threshold. That result should translate to a somewhat smoother and more fluid experience with the Intel CPUs overall.

 

Crysis 3

It may be getting up there in years, but Crysis 3‘s lush “Welcome to the Jungle” level is still one of the most punishing tests of CPU gaming performance we’re aware of. We tested the game at 2560×1440 with its “Very High” preset.


With these settings, Crysis 3 seems to take advantage of every thread it can get, but the net result of more than 20 threads seems to be a wash at 2560×1440. The only outlier—once again—is the i7-7820X, and only with DDR4-3600 memory.


Even with that weird 99th-percentile frame time, neither the i7-7820X nor any other chip here spends an appreciable amount of time holding up the graphics card past the 16.7-ms mark. The 8.3-ms threshold shows that all of these parts are about equally matched: the best and worst chips in this test are separated by about a second (except for the i7-7740X, which we should expect to lag in this heavily-threaded test).

Crysis 3 closes out our gaming results, and the numbers are largely positive for Threadrippers. In fact, I’m most worried about an Intel chip after seeing these numbers. The i7-7820X exhibits bizarre stuttering that the i9-7900X simply doesn’t, for the most part. We have a hunch why this might be, and it could involve the speed of the chip’s on-die fabric versus that of its main memory. We’ll need to examine that hunch in a separate review and with more CPU-directed game testing, though.

With only a couple exceptions, AMD’s many-core chips deliver a similar gaming experience to that of the Core i9-7900X, even without Game Mode enabled. We’d be equally happy pairing a GTX 1080 Ti and a high-refresh 2560×1440 monitor with the X399 platform as we would be with X299. We figure if you’re a gamer with the scratch for a $2000-or-more PC before a graphics card enters the equation, a $500-or-more monitor and a $700 graphics card probably aren’t much of a stretch. The flip side is that if you’re thinking about 1920×1080 gaming, these server-class chips with complex on-die interconnects will probably perform worse than cheaper, simpler chips with fewer cores and higher clocks. Spending $2500 or more on one of these machines for 1920×1080 gaming alone isn’t just unnecessary, it’s daft.

Now that we know Threadrippers have game, let’s give them plenty of threads to rip in our productivity tests.

 

Javascript performance

The usefulness of Javascript benchmarks for comparing browser performance may be on the wane, but these collections of tests are still a fine way of demonstrating the real-world single- or lightly-threaded performance differences among CPUs.

 

No surprises here: a Zen core is a Zen core, and even the 4.2 GHz XFR boost on the Threadripper CPUs doesn’t move the needle in these benchmarks compared to the Ryzen 7 1800X. Still, the gap between the Threadrippers and the Skylake-X chips in these tests isn’t that large—about 11% to 12% at most. Since the Core i7-7740X is basically a Core i7-7700K, it exhibits that chip’s world-beating single-threaded performance here.

Compiling code with GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compiler. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
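The job-count policy here is the same one behind `make -j`: dispatch one compile job per hardware thread. A minimal sketch of that policy (the `make` invocation stands in for the real Qtbench script, which wraps GCC builds of the Qt SDK):

```python
import os

# Query the number of hardware threads the OS exposes (32 on a 1950X,
# 20 on an i9-7900X). cpu_count() can return None, so fall back to 1.
jobs = os.cpu_count() or 1

# Build the command line: one compile job per hardware thread. This is a
# stand-in for the Qtbench script's configurable job dispatch, not the
# script itself.
cmd = ["make", f"-j{jobs}"]
print(" ".join(cmd))
```

Matching jobs to threads keeps every core busy without oversubscribing the scheduler, though disk and memory-bandwidth bottlenecks can still cap scaling past 20 threads, as our results suggest.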

The Threadripper chips come out of the multithreaded performance gate with a win over the Core i9-7900X. Once you start throwing 20 threads or more at this problem, though, bottlenecks seem to appear elsewhere. Regardless, Threadrippers are the fastest things going overall.

File compression with 7-zip

7-zip’s compression test is another example where the battle between more cores and faster cores generally comes out in the wash. Decompress some files, though, and the Threadrippers scorch the i9-7900X, never mind the rest of the contenders here.

Disk encryption with Veracrypt

Encryption is another task that benefits from many cores, and in the accelerated AES portion of our Veracrypt testing, the Threadrippers enjoy a healthy win (although the difference between the 1920X and 1950X isn’t large). Use an algorithm that can’t be accelerated, though, and the 1920X and 1950X pull even farther ahead of the pack.

 

Cinebench

The Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Intel’s chips take a clear win in the single-threaded portion of Cinebench, but that’s not why we’re here. Multi-threaded performance is where it’s at for CPU rendering, and the Threadrippers easily take the top spots in this test.

Blender rendering

Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

Blender is another workload that heavily favors Threadrippers. The Threadripper 1950X shaves an amazing 22% off the i9-7900X’s time to completion, while the 1920X matches the Intel chip. It just goes to show that if your app can take advantage of as many cores as you can throw at it, Threadripper could let you finish the same task in less time or get more work done in a given amount of time. We also call that “more bang for the buck.”

Handbrake transcoding

Handbrake is a popular video-transcoding app that recently hit version 1.0. To see how it performs on these chips, we’re switching things up from some of our past reviews. Here, we converted a roughly two-minute 4K source file from an iPhone 6S into a 1920×1080, 30 FPS MKV using the HEVC algorithm implemented in the x265 open-source encoder. We otherwise left the preset at its default settings.

Although the x265 encoder should take advantage of AVX2 instructions, that alone isn’t enough to let the Core i7-7820X and the Core i9-7900X outpace the Threadripper competition. The 1950X comes very close to matching the i9-7900X, while the 1920X is in a dead heat with the i7-7820X. Should the x265 developers add AVX-512 support to the encoder, this performance picture could change, but the Threadrippers are as good as the best chips out there with today’s software.

 

CFD with STARS Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here. We configured Euler3D to use every thread available from each of our CPUs.

The extra bandwidth and cores afforded by Threadrippers help elevate their Euler3D performance beyond that of the Ryzen 7 1800X, but even the i7-7820X can open a significant lead over the Threadripper 1950X. The i9-7900X and i7-6950X are faster still.

It should be noted that the publicly-available Euler3D benchmark is compiled using Intel’s Fortran tools, a decision that its originators discuss in depth on the project page. Code produced this way may not perform at its best on Ryzen CPUs as a result, but this binary is apparently representative of the software that would be available in the field. A more neutral compiler might make for a better benchmark, but it may also not be representative of real-world results with real-world software, and we are generally concerned with real-world performance. Within those constraints, Skylake-X chips still seem to be the superior platform for this kind of work.

Digital audio workstation performance

One of the neatest additions to our test suite of late is the duo of DAWBench project files: DSP 2017 and VI 2017. The DSP benchmark tests the raw number of VST plugins a system can handle, while the complex VI project simulates a virtual instrument and sampling workload.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU with a 24-bit depth and 96-KHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. We then added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
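For context on why those buffer depths are so punishing, ASIO buffer depth translates directly to latency: one buffer of N samples at sample rate R takes N/R seconds to fill, and the CPU must deliver audio for every buffer inside that window. A quick sketch of the arithmetic:

```python
def buffer_latency_ms(samples, rate_hz):
    """One-way buffer latency in milliseconds: the time to fill one
    ASIO buffer of the given depth at the given sampling rate."""
    return samples / rate_hz * 1000.0

# Our two test settings at a 96-kHz sampling rate.
for depth in (64, 128):
    print(f"{depth} samples -> {buffer_latency_ms(depth, 96000):.2f} ms")
# At 64 samples, the CPU has roughly 0.67 ms per buffer; miss that
# deadline even once and the result is an audible pop or click.
```

Relaxing the buffer to 256 samples at 96 kHz would give the CPU about 2.7 ms per buffer, which is why higher depths are so much more forgiving.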

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the fine folks there. Be sure to check out their many fine digital audio products.

In the DSP test, the Core i9-7900X and Threadripper 1950X are neck-and-neck at our most demanding settings, and the i7-7820X and 1920X are also locking horns. Relax the buffer depth, and the Threadrippers take the lead. One quirk of this test was that the Threadrippers were competitive with SMT on, but they performed best with SMT off. Some reading around suggests that SMT or Hyper-Threading can often be a source of reduced performance in audio applications, so the SMT-off numbers are the results we’re presenting for the DSP benchmark. We note this as something to be aware of if you’re considering a Threadripper for this kind of work. The Intel chips didn’t seem to mind either way.

The VI test isn’t as rosy for the AMD corner. While it’s hard to say for sure, we suspect this test tends to favor low memory latency, high cache bandwidth, single-threaded performance, and raw clock speed at our test settings. Skylake-X chips have higher performance in all of those areas, and it seems to show in our results. The dominating performance of the Core i9-7900X is no fluke in these tests, either. We retested it several times to be sure, but the gap remained.

Readers will rightly perk up at a gap like this one, and when we see such a gap between two otherwise closely-matched parts, we do our best to get to the bottom of it. We tried overclocking the Threadrippers, to no effect. We flipped SMT off, to no effect. At the suggestion of one reader, we even played with the number of active cores in Reaper, also to no effect.

More testing may be required to pinpoint the source of this bottleneck, but we aren’t alone in seeing it. Scan Pro Audio observed similar DAW performance with Threadripper, and the folks there have published an exhaustive analysis of why the chips might perform the way they do. The crux of the matter seems to be that if you can tolerate higher buffer depths (greater than 256), lower sampling rates, and ultimately higher latency, Threadripper may be competitive with Skylake-X chips for this type of workload. Using our demanding settings, however, the monolithic design of the Intel chips makes them superior—and in the case of the i9-7900X, far superior—for the most demanding virtual instrument fanatics.

Power consumption and efficiency

We now know that Ryzen Threadripper CPUs offer a ton of performance, but that’s not much good if their energy consumption causes their owners to run up a ton of kilowatt-hours on their power bills. We can get a rough idea of how efficient these chips are by monitoring system power draw in Blender. Our observations have shown that Blender consumes about the same amount of wattage at every stage of the bmw27 benchmark, so it’s an ideal guinea pig for this kind of calculation.

Since a joule is simply one watt expended for one second, we can use that fact to estimate the total task energy in kilojoules expended over the course of each CPU’s Blender run. So that this data can be more readily understood, we’ve plotted it using one of our famous scatter charts. The most efficient chips complete the task in as little time as possible while expending the least energy, so the winner will be toward the origin of the chart.
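As a concrete sketch of that arithmetic (the wattages and times below are made-up illustrative figures, not our measured data):

```python
def task_energy_kj(avg_watts, seconds):
    """Total task energy in kilojoules: average watts times seconds,
    divided by 1000 to convert joules to kilojoules."""
    return avg_watts * seconds / 1000.0

# Illustrative numbers only, not our measured Blender results:
fast_but_hungry = task_energy_kj(300, 200)  # 300 W for 200 s = 60.0 kJ
slow_but_frugal = task_energy_kj(180, 400)  # 180 W for 400 s = 72.0 kJ
print(fast_but_hungry, slow_but_frugal)
```

Note that the higher-draw system can still win on total energy if it finishes fast enough, which is exactly the dynamic that decides the efficiency race below.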

Slicing up the data this way, the Threadripper 1950X finishes our Blender workload far faster than any other chip here. Despite its relatively high system power draw, the chip gets done fast, so its total task energy makes it the most efficient among our contenders. The Threadripper 1920X and Core i9-7900X both need a little longer to complete the job while consuming just as much power under load as the 1950X, so their efficiency is less favorable. The Core i7-7820X may take longer than the 1920X and i9-7900X to finish the job, but its total task energy isn’t any higher than those chips’. In fact, the i7-7820X is the efficiency winner compared to the Ryzen 7 1800X despite the Skylake-X chip’s higher power draw.

In any case, the Threadripper 1950X boasts impressive efficiency to go with its impressive performance. Its high power draw will be of some concern for folks whose workloads are continuous, like gamers and streamers, but programmers and artists will find that their compiles and renders finish both quickly and frugally.

 

Conclusions

Before we issue a verdict on the Ryzen Threadripper 1920X and Threadripper 1950X, it’s time once more to condense our results using our famous value scatter plots. We take the geometric mean of all of our real-world test results and plot that figure against the retail prices of the chips at hand.
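The geometric mean is the right tool here because it keeps any single benchmark's scale from dominating the index. A minimal sketch of the calculation, using hypothetical normalized scores rather than our actual results:

```python
import math

# Hypothetical normalized scores (1.0 = baseline chip), for illustration
# only -- not our actual benchmark results.
scores = [1.10, 0.95, 1.30, 1.05]

# Geometric mean: the nth root of the product of n values. Computing it
# via logarithms avoids overflow when many benchmarks are multiplied.
geomean = math.exp(sum(math.log(s) for s in scores) / len(scores))
print(f"{geomean:.3f}")
```

A 30% win in one test and a 5% loss in another thus average out multiplicatively, which is why a chip with one outlier result (like the 1950X in DAWBench VI) can still see its overall index dragged down.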


The Ryzen Threadripper 1950X and Ryzen Threadripper 1920X are remarkable capstones for AMD’s CPU renaissance. For the same price as the Core i9-7900X, the 1950X can usually match and sometimes handily outperform the Skylake-X CPU in our real-world productivity benchmarks. In our measure of estimated task energy using the Blender bmw27 benchmark, the 1950X was both much faster and more frugal than the Skylake-X chip. That’s a heck of a way to break back into high-end desktops.

As the 1950X’s index in our full breadth of tests indicates, however, its sheer performance potential isn’t enough to let it take the absolute performance crown. That’s because of its somewhat split personality in digital audio workstation tasks. The 1950X performed quite well in my DAWBench DSP testing once I turned off SMT, but SMT or no, the 1950X seemed to hit a wall well before Skylake-X parts in our DAWBench VI testing. Even so, this is one of the few tests in our suite where the $1000 Core i9-7900X was clearly superior to the AMD competition. Take DAWBench VI out of the equation (as we did in an alternate scatter above) and the results fall more where you might expect.

The Ryzen Threadripper 1920X, on the other hand, packs a more inconsistent performance wallop against the eight-core Core i7-7820X. In tasks like rendering and compiling code, the 1920X takes the lead. In other tasks, like transcoding and digital audio, the i7-7820X holds its own or beats out the 1920X. Whether those wins justify the 33% higher price tag of the 1920X will depend on your individual needs. Our overall performance index hides these nuances.


Gaming performance in our test configuration is another bright spot for Threadripper. Going by the performance potential suggested by average frames per second, Intel’s chips pull a bit ahead of Ryzen CPUs across the board, even at 2560×1440. In our 99th-percentile FPS measure of delivered smoothness, though, Threadrippers are on par with the Core i9-7900X. The frustratingly inconsistent i7-7820X should be considered an outlier at this stage. We’ll need to perform further testing on that chip to try to pinpoint why its 99th-percentile frame times don’t match its considerable performance potential.

Threadripper CPUs are only one half of the X399 platform, of course. Motherboards are equally important. Unlike my bumpy experience with the AM4 platform in its infancy, though, Socket TR4 and the X399 chipset seem quite mature already. Once I learned how to install a Threadripper CPU, I got up and running without any instability or unpleasantness from my Gigabyte X399 Aorus Gaming 7 mobo. I also didn’t experience any drama getting my 32GB quad-channel kit of DDR4-3200 RAM up and running on that board, another marked improvement in user-friendliness.

The virtues of X399 don’t stop there, either. It’s refreshing not to have to think about mind-bending PCIe lane-routing diagrams and ports going dark with different Threadrippers, since every TR4 CPU will function the same way with every X399 motherboard. Lights and bling and gaming pretenses of those boards aside, that ease of use will be important for semi-professional or pro users without precious time to waste.

Even if multi-GPU configurations are on the wane for gaming, X399 offers a unique canvas to folks who need gobs of PCIe lanes for multiple graphics cards, storage devices, capture cards, or fast network cards. Even if I have trouble imagining a use for every one of them, 60 PCIe 3.0 lanes from the CPU on an ATX motherboard isn’t a figure Intel’s Core X CPUs will be able to match this generation. The X299 platform can offer more chipset PCIe lanes to close the gap, to be sure, but those chipset lanes will be bottlenecked by the relatively paltry bandwidth of the DMI 3.0 link between CPU and PCH. Threadripper has enough PCIe lanes direct from the CPU to avoid this bottleneck.

AMD Ryzen Threadripper 1950X

August 2017

Ultimately, recommending high-end CPUs isn’t an all-or-nothing process. First and foremost, folks just shouldn’t buy $1000 CPUs for gaming or lightweight desktop tasks. I imagine the people who know they need a $1000 chip are already scribbling out the necessary return-on-investment calculations for their particular workloads. Others will want to carefully consider whether their needs are best served by AMD’s copious PCIe lanes, high core count, and prodigious memory bandwidth, or by the slightly higher per-thread performance, the future potential of AVX-512 support, and the equally prodigious memory bandwidth of Intel’s Skylake-X parts.

If the particular applications you depend on do catch the waves Threadripper is making in high-end desktop performance, AMD has put together a platform that’s nearly flawless for surfing them. The cherries on top for enthusiasts and professionals alike are touches like ECC RAM support and soldered heat spreaders that simply aren’t available on Intel’s high-end desktop chips for now. The Ryzen Threadripper 1950X generally offers more for the money than the Intel competition, and that makes it a TR Editor’s Choice.

Comments closed
    • Mr Bill
    • 2 years ago

    An excellent review; it’s got me wanting a 1900X with a quad-channel motherboard as a starter. The geek in me is super impressed by the long list of features for future device expansion and the lack of market segmentation limits.

    Not to mention the clear indications that with sufficient cooling and improvements in process technology this design has legs…
    [quote<]AMD's in-house team of extreme overclockers tried their hand at putting a Threadripper 1950X under liquid nitrogen last night, and the net result was a roughly 5.2 GHz all-core overclock. With 16 cores and 32 threads churning away at those speeds, the chip produced a Cinebench all-core score of 4188. [/quote<]

    • xigmatekdk
    • 2 years ago

    There’s a reason your x299 systems showed no gain with DDR4 3600. Your AIDA64 latency is way too high at that speed. People with 7900x and 7820x usually get around ~54ns with 3600 RAM. You can read more about it here:
    [url<]https://hardforum.com/threads/skylake-x-core-i9-lineup-specifications-and-reviews.1933735/page-29[/url<] And the 7820x's Turbo Boost 3.0 causes stuttering with MSI x299 boards. They still haven't released a BIOS to fix it LMAO. This reviewer also suffered the same problem until he turned TB3.0 off: [url<]http://www.eurogamer.net/articles/digitalfoundry-2017-intel-skylake-x-review-core-i9-7900x-i7-7820x-i7-7800x-i7-7740x[/url<] MSI continues to disappoint.

    • Mr Bill
    • 2 years ago

    It would be interesting to show the multithreaded tests (such as Cinebench) as a ‘score vs. number of threads’ graph along with or in addition to a bar graph. Could put in some 1:1 lines for reference for multicore efficiency.

    • Sacco_Belmonte
    • 2 years ago

    I find the audio test was conducted on the edge.

    Latencies at 64 and 128 samples are too tight for any system. Not only for the CPU but also for RAM and buses. Nobody realistically uses 128 or less. Not for heavy projects anyway.

    These tests would be better conducted at 256 and 512 samples latency.

    Also. I would like to replicate the tests here with my Ryzen 1800x at 4Ghz, but there is no info about that either. How many Kontakt instances, what loaded on them?

    I would like to have a project I can try in Reaper if possible.

    • tally
    • 2 years ago

    i7 7820x is buggy with the Turbo Boost 3.0. You have to manually turn it off via BIOS to fix the stuttering.

    • xigmatekdk
    • 2 years ago

    GPU bottleneck on a CPU test. Cmon brah

    • Bumper
    • 2 years ago

    Up until now I’ve read threadripper as threadreaper. ¯\_(ツ)_/¯

      • UberGerbil
      • 2 years ago

      I like to read it as “three-dripper”

    • ronch
    • 2 years ago

    Somewhere in a dark office in the US where almost everyone has left for the weekend sits a tall man in his cubicle, glaring at his computer monitor in the dark, sipping coffee to keep himself awake after a busy day designing electric cars while reading an AMD CPU review. His name is Jim, Jim Keller.

    While the days spent at that other company now seem like a distant memory, he remembers all too well the inner workings of the project he led, having spent similar overtime nights at that other company’s office with green cubicles, and now he feels gratified after seeing the fruits of his labor.

    Finished with the review, he gives a slight nod of satisfaction, switches off his monitor, reclines in his chair, and takes a short nap.

    • Shouefref
    • 2 years ago

    Thank you. The article has the information I was looking for.

    • ludi
    • 2 years ago

    Great review and well worth the wait.

    • msroadkill612
    • 2 years ago

    That NUMA has little effect is a good outcome. It shows that UMA works well, as one hopes.

    • green
    • 2 years ago

    is there some more detail on the 7-zip test?
    file size, number of files, etc

    • Mr Bill
    • 2 years ago

    What happened to the i7-6950X on the DAWBench VI (96 KHz/64 Samples)? That seemed incongruous at least. Even the Ryzen 7 1800X was crushing it.

      • Dave Null
      • 2 years ago

      I’m pretty skeptical of these results. Besides the odd 6950x numbers, the 7740x only having 14% of the 1800x’s performance in this benchmark at 64 samples is pretty suspect, too. Clocks and IPC are king in the DAW world. These numbers don’t really pass the smell test.

        • Mr Bill
        • 2 years ago

        maybe the lower clock speed for the i7-6950X and/or memory latency? [quote<]we suspect this test tends to favor low memory latency, high cache bandwidth, single-threaded performance, and raw clock speed at our test settings.[/quote<] Edit: deleted "affects"

    • jackbomb
    • 2 years ago

    It’s insane how power hungry HEDT CPUs got to be in just a few short years.
    I’m still enjoying a 6c/12T IVB-E processor at 4.3GHz (1.15v). My $25 Hyper 212 EVO manages to maintain 80 degrees C running Prime95, and around 42-60 doing day-to-day stuff.

    • exilon
    • 2 years ago

    Jeff, when experimenting with a Skylake X system, I found that very high RAM speeds with stock mesh speed would kill minimum frames on some games. However I’d expect that to impact the 7900X as well, unless its 20% larger L3 mitigated that issue.

    Also, a user on reddit’s nvidia driver thread reported that he had stuttering on a 7820X that was resolved by the latest driver:

    [url<]https://www.reddit.com/r/nvidia/comments/6tn7fb/driver_38528_discussion_thread/dllz4mr[/url<]

    • ermo
    • 2 years ago

    Thanks for the excellent review Jeff. =)

    One question:

    In the Cinebench R15 charts, the R7 1800X scores 151/1513, while the TR CPUs score 168-169 in ST.

    For reference, you have the R7 1800X scoring 165/1647 in your [url=https://techreport.com/review/31979/amd-ryzen-5-cpus-reviewed-part-two/4<]R5 review[/url<] (i.e. 10% higher). Assuming the same ST boost clock, this score is also closer to the CB R15 ST scores I've seen for other Zen reviews with 4GHz + XFR core speeds. Do you have any idea why there'd be a 10% discrepancy between reviews for the R7 1800X in particular? Because it genuinely seems that the 4+ GHz XFR ST score ought to fall consistently in the 165-169 range...

    • Klimax
    • 2 years ago

    Yep, 14 cores needed to beat in some benches 10 core. (or match) There is a reason why those chips are fairly cheap and why Intel didn’t change their pricing structure.

    14 cores versus 14 cores won’t be pretty.

      • ermo
      • 2 years ago

      You’re most likely right.

      However, it’s hard to argue against the value proposition in the areas where Zen is on par or exceeds Skylake-X in perf/$, which is clearly why AMD is pricing TR as it is.

      Personally, I’m waiting for the yields to improve and the tweaks to land in Zen+. My guess is that AMD is already hard at work improving the frequency headroom and adding additional AVX2 performance along with other small optimisations.

      Then again, intel isn’t standing still either. It’s amazing what genuine competition will do to a previously stagnant market.

        • Klimax
        • 2 years ago

        Actually, we are not yet seeing any big changes in Intel plans. We see mostly adjustments. (Few months delta and maybe some MHz changes too)

        Actually if Intel will need to do larger changes or not will be known after follow-up for Zen. I am not yet convinced AMD can keep such large jumps in IPC.
        With Zen AMD already has cross between Sandy Bridge and Haswell. They effectively already used up most of tricks and things Intel used.

      • cynan
      • 2 years ago

      [quote<]14 cores versus 14 cores won't be pretty.[/quote<] Especially since none of the CPUs featured in this article feature 14 cores

        • Klimax
        • 2 years ago

        You’re right. I have misread 16 as 14. It is even worse. AMD needs 16 cores to beat 10? My point definitely stands.

          • thx1138r
          • 2 years ago

          Did you even look at the review? The 12-core Threadripper outperforms the 7900X in quite a few benchmarks. This makes the even cheaper 12-core a better option for certain workloads. My point is that we are beyond the point of “does X beat Y”, it’s more subtle than that now, it’s “does X beat Y at Z”.

    • ronch
    • 2 years ago

    Back when the FX practically sent AMD down to Davey Jones’s locker someone at Intel said something about Intel no longer expecting AMD to be much competition anymore.

    I’d REALLY like to recall who that Intel guy is (he’s just a random Intel minion, I guess) and what he thinks about all this fuss about Ryzen these days. Who knows, maybe he already bought Ryzen? 😉

      • UberGerbil
      • 2 years ago

      And back when the Opteron came out, there were commenters on this very website who were writing Intel’s obituary, absolutely certain it would be dead by 2010 at the latest. (I remember trying to point out how much money Intel had in the bank; even at their burn rate at the time assuming no one ever bought another Intel product, they’d last longer than that. But to no avail; some people would rather entertain delusions of the Empire collapsing at the hands of their beloved Rebel Alliance than do any kind of rational analysis).

      • Kretschmer
      • 2 years ago

      Even now they won’t be much competition. I’m sure Intel will maintain 80+% marketshare.

    • chuckula
    • 2 years ago

    True fact: The codename for each die in ThreadRipper is Zeppelin.

    So there are two Zeppelins in each ThreadRipper.

    Two Zeppelins –> Zeppelin two –> [b<]LED ZEPPELIN II![/b<] [quote<]You need liquid-cooling Ripper I'm not foolin' I'm gonna overclock ya To a TR reviewin' A-way down inside A-ripper you need threads I'm gonna give you some threads I'm gonna give you some threads Wanna whole lotta cores Wanna whole lotta cores Wanna whole lotta cores Wanna whole lotta cores You've been earning Ripper I been earning All them good Cinebenches Ripper, Ripper I've been yer-yearning A-way down inside A-ripper you need threads I'm gonna give you some threads I'm gonna give you some threads Wanna whole lotta cores Wanna whole lotta cores Wanna whole lotta cores Wanna whole lotta cores You've been liquid-coolin' And fanboys they've been droolin' All the rendering benchmarks, ThreadRipper They've been misusin' A-way, way down inside Ah AMD you need cash I'm gonna give you every dollar of my cash Wanna whole lotta cores Wanna whole lotta cores Wanna whole lotta cores Wanna whole lotta cores Way down inside... Ripper... You need... CORES!! Crunch for me Ripper I wanna be your fanboy man Hey! Oh! Hey! Oh! Hey! Oh! Keep it cooling, heatsink Keep it cooling, heatsink Keep it cooling, heatsink Keep it cooling, heatsink Ah keep it cooling, heatsink![/quote<]

      • ronch
      • 2 years ago

      If you say something is a fact, do you still need to say that it’s true? 😉

        • Mr Bill
        • 2 years ago
      • Mr Bill
      • 2 years ago

      Where’s that confounded bridge?

    • ronch
    • 2 years ago

    Still Ryzen for me 😉

    • Gadoran
    • 2 years ago

    Very good review….but. Same price with 7900x, same average performance, 10 cores vs. 16 cores. Something is wrong.
    It is nice to see a well performing cpu from AMD but honestly speaking a comparison between a cpu and another one with 60% more cores in a heavy multithreading bench suite is a little pointless. Under a standard usage (a good mix between many-thread and few-thread SWs) 7900x looks better, more balanced and faster in most games.
    I have to assume an absolute lead for the upcoming Intel many-core lineup. Fortunately they will be soldered, and the lower CPU temperature will drop the temperature-related leakage, allowing lower power consumption.
    Anyway better than nothing, waiting for the monolithic die AMD solution. A good preview of AMD future if they will execute well.

      • shank15217
      • 2 years ago

      If you think so then go read a Epyc vs Xeon article

      • Anonymous Coward
      • 2 years ago

      Oh, car analogy time. Let’s say you’re buying a heavy truck for the business. Would you hesitate to compare competitors where one had a straight-6 and the other a V8?

    • USAFTW
    • 2 years ago

    Thanks for the review Jeff! Great stuff.
    I’m glad first and foremost to see AMD not only being able to compete at the very high end, but doing so well. The power consumption of the 1950x looks particularly good next to the 7900x. And I’m glad details like the lack of soldering under the heatspreader for best conductivity on Intel stuff were mentioned. Paying top dollar for a high end platform while your CPU temperatures spiral out of control even as the top of the heatspreader is barely warm to the touch (http://www.tomshardware.com/reviews/intel-core-i9-7900x-skylake-x,5092-11.html) is not cool.

    • Doctor Venture
    • 2 years ago

    Nice review! Well worth the wait. 😀

    My only regret, was that when Jeff was asking what we’d like to see in the review, I should’ve piped up, and offered my license key for ESXi (the 6.5U1 patch added support for Zen-based CPUs), along with some of the resource heavy Cisco and Juniper VMs I run all the time (The companies I’m a subcontractor for, have the necessary permissions to download the non-VIRL images).

    I thought it wouldn’t be fair to Jeff & Co. to plop a request like that in their lap on such short notice (especially since it would assume they had in-depth working knowledge of ESXi, Junos, IOS-XR, IOS-XE, NX-OSv, etc…), and they were already pressed for time.

    Maybe in a later review, they could include tests where they practically max out 128GB RAM, and fling massive amounts of network traffic between the VMs.

    One of these days (after a few more contract gigs), I intend on getting a system built around a 1950X with 128GB RAM, and 2 1TB Samsung 960 Pro NVMe SSDs (so I can just swap between an ESXi native system, and one with either Citrix or the latest Windows Server w/Hyper V). Maybe I could pass along the results to them then? Well, that, or just post it in the forum, so I can get mocked by people assuming I bought it as a gaming rig, lol.

    Anyway, TL;DR: Great review Jeff! Glad to see that ThreadRipper didn’t get zapped by the GCC segfault bug.

    • homerdog
    • 2 years ago

    Has anyone figured out why Skylake-X is so much more power hungry than Broadwell-E? It seems almost like a bug considering no such difference is present on the mainstream stuff.

      • Doctor Venture
      • 2 years ago

      Dunno about that, but the reason I’ve resisted getting on the Skylake-X and Kaby Lake-X bandwagon has been the controversy about TIM vs. solder. Maybe part of the reason for the power draw is the early X299 motherboards, as well as the switch to an on-chip mesh instead of a ring bus. I was surprised to see the quoted TDPs go as high as 140 W for some of the parts.

      I guess my next gaming rig will either be an i7-7700K Kaby Lake with a nice z270 motherboard, or wait until Coffee/Cannon Lake roll out. :/

        • Kretschmer
        • 2 years ago

        The 7700K is a gaming beast. Worlds ahead of my old 3570K.

      • Unknown-Error
      • 2 years ago

      One possible reason, at least in Prime95, is AVX-512. A few members at Hardforum.com (particularly Juanrga) claim it acts sort of like a power virus, utilizing AVX-512 to the fullest. But when you look at Kaby Lake-X (7640X, 7740X) or any Intel CPU that doesn’t use AVX-512, efficiency is very good compared to AMD.

      • synthtel2
      • 2 years ago

      I’m betting all the doubled-up internal bandwidth to support AVX-512 is responsible for a lot of the extra power consumption in normal use.

        • Klimax
        • 2 years ago

        You forget 512-bit FMA unit. (Which gets pretty sure sued a lot in x264/x265 and Blender)

          • UberGerbil
          • 2 years ago

          Yeah, those codecs are a litigious bunch.

          • synthtel2
          • 2 years ago

          I didn’t forget it, it’s just that the FPUs themselves shouldn’t have much effect on power consumption when not in active use. When AVX-512 is active, it’s no surprise it’s power-hungry. What needs explaining is why it is even under lighter loads.

      • Klimax
      • 2 years ago

      AVX-512 (the FMA unit!), the L1d path, and using first-gen 14 nm. (IIRC Kaby Lake and later use a newer version of it)

      • tipoo
      • 2 years ago

      Beaten to the punch, but to add detail: 256-bit AVX was already forcing lower-than-rated clocks because of the heat it produced, so I can only imagine AVX-512’s peak is significantly higher.

    • homerdog
    • 2 years ago

    I would like to see some older CPUs at least in the gaming benchmarks on CPU reviews. Would like to know how my 3770K compares these days.

      • juzz86
      • 2 years ago

      Me too. However, I treat the 7740X as the equivalent of a 7700K in newer reviews, and use the 7700K review to compare to the older parts, as that one went a fair way back 🙂

      I have just jumped from a 3770K to a 7700K myself, and there’s been a noticeable net benefit in the games I play.

        • homerdog
        • 2 years ago

        Do you have a >60Hz monitor?

          • juzz86
          • 2 years ago

          Yep, PG278Q mate.

      • itachi
      • 2 years ago

      Yeah, and a 6600K just to show, also the 2500K/2600K, all of these OC’d.. hope they do this with the 8700K review!

      This is more HEDT, so.. you can always look at the 7740X review if they included the 3770K there.

      Also sad to see no Game Mode numbers on at least a few games, and same for the other modes.

      Also, CPU-bound games like Arma or StarCraft 2 in CPU reviews… you would think it would make sense!

        • Krogoth
        • 2 years ago

        Starcraft 2 and Arma are limited by being effectively dual-threaded, and clockspeed is king for them. Your best bets are the 7600K and 7700K. The 7740X is only a sub-par choice because it is tied to a platform it cannot fully utilize.

    • Waco
    • 2 years ago

    I have zero reason to buy one of these; however, after reading this I’m still sorely tempted.

      • chuckula
      • 2 years ago

      Wait I thought there were 60 lanes of PCIe reasons to get one!

        • Waco
        • 2 years ago

        Sure, but now I’m considering desktop and NAS… 😛

      • Mr Bill
      • 2 years ago

      +3 I feel the same way!

    • Krogoth
    • 2 years ago

    Excellent powerhouse CPUs with plenty of I/O expandability but you can still game on. Threadripper has brought sorely needed competition for the power user/prosumer market. Hopefully, it will force price pressure on Intel’s HEDT SKUs.

    It is overkill for the majority of desktop users though.

    • K-L-Waster
    • 2 years ago

    With these products, the HEDT market is definitely no longer a game of “get [insert product name here] it’s the best forget the rest”. It’s now very solidly in the “well, depends, what are you going to do with it?” ballpark.

    I think we can also officially declare that AMD has inverted their business competitiveness. For the past couple of years their CPU line was a mess but the GPUs were in a dogfight with Nvidia. Now it’s turning into the reverse: their GPUs are falling behind, but they are seriously competitive with Intel in desktop and HEDT CPUs (with presumably the server space joining the party in the very near future).

      • tipoo
      • 2 years ago

      Makes me wonder if they only have the investment capital to fight the good fight in one pillar at a time.

        • K-L-Waster
        • 2 years ago

        Maybe.

        I suspect part of it on the GPU side may be the commitment to HBM. HBM has great absolute performance, but at a high cost, low availability, and for the moment limited sizes. AMD may simply have gone all-in on HBM too soon. It’s like they’ve designed the architecture around having lots of very high speed memory, but the cost, supply, and memory capacities simply aren’t there yet to allow it to shine.

        Unfortunately for Raj and the gang, that’s the kind of strategic decision that can’t be reversed quickly. As you suggest, if they had NVidia’s bank balance they could probably afford to develop a redesigned version around GDDR — for AMD, it’s probably too much expense for a medium term redesign.

        Nvidia is of course also using HBM in their high end cards, but those cost multiple thousands of dollars so the extra cost on the HBM can be easily recouped. In the gaming space they are still sticking with flavours of GDDR.

          • tipoo
          • 2 years ago

          And Nvidia is already making those split architectures, i.e. there will be a compute Volta and a gaming Volta. AMD, by contrast, makes one architecture that does compute well but also sells it as their gaming brand.

          I’m half wondering…If RTG spun out, would it get a new round of venture capital? If so, a few extra B’s to throw at GPUs would certainly help. I don’t think Vega even hit the big B, while Nvidia spent 3 billion on each new uArch.

    • Air
    • 2 years ago

    Thanks for the review. I think I’ll never have any use for this kind of CPU, but enjoyed reading it anyway.

    I would just like to point out that the sentence below is reversed; it’s a watt that is defined as a joule per second.

    [quote<]Since a joule is simply a watt expended over a second[/quote<]

    The way this is written makes it sound as if 1 J = 1 W/s.

    EDIT: I interpreted that the wrong way, sorry.

      • JustAnEngineer
      • 2 years ago

      The article is correct.
      1 Joule = 1 Watt · 1 second

      Yes, that is mathematically equivalent to saying
      1 Watt = 1 Joule ÷ 1 second
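A quick sketch of the relation being discussed, with made-up figures (the 180 W draw is an illustrative number, not one from the review):

```python
# Energy = power * time: 1 J is 1 W expended over 1 s.
power_watts = 180.0       # hypothetical sustained package power
duration_seconds = 60.0   # hypothetical one-minute load

energy_joules = power_watts * duration_seconds

print(energy_joules)         # 10800.0 J
print(energy_joules / 3600)  # 3.0 Wh, since 1 Wh = 3600 J
```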

        • Air
        • 2 years ago

        I know that, but I thought he was saying that 1 J = 1 W / 1 s.

      • Shobai
      • 2 years ago

      Temporal ‘over’ vs mathematical ‘over’? Perhaps read it as ‘across’, or ‘over the duration of’.

      [edit]

      I meant to mention that ‘expended’ is your clue to reading the sentence correctly: if he’d meant the mathematical ‘over’, he wouldn’t have written ‘expended’.

        • Air
        • 2 years ago

        Oh, I see it now. Thanks for the explanation, and I’m sorry for the misunderstanding. It was caused by my bad English skills.

        I still think this is a weird way to write it (I kinda get a feel of “the definition of a joule is…”), but I’m definitely not qualified to discuss English.

          • Shobai
          • 2 years ago

          Not a problem, and no need to apologise.

    • Ruiner
    • 2 years ago

    After reading the excellent ‘A Bridge Too Far’ here, I was hoping that Arma would end up in the gaming benchmark rotation, especially for CPUs.

      • drfish
      • 2 years ago

      I’ll keep pulling for it. 🙂 For now, the answer is still: get an Intel CPU to the highest overclock you can manage and pair it with super-speedy RAM. Jeff says the i7-7740X does 5 GHz without any effort at all.

        • Philldoe
        • 2 years ago

        Doesn’t Arma 3’s frame rate have a lot to do with server frame rate?

          • drfish
          • 2 years ago

          Yeah, I’d give the same CPU advice to people that run Arma servers too (except maybe skip the overclock).

      • Mr Bill
      • 2 years ago

      That was an interesting read. Any speculation as to whether the game developers made poor use of buffers and/or other latency-mitigation techniques?

      • Ruiner
      • 2 years ago

      Wow, the Arma-engine PUBG is nipping at Dota 2’s heels on Steam for player volume.

    • willyolioleo
    • 2 years ago

    Your sentence was cut off on page 3:

    [quote<]Killer's E2500 Gigabit Ethernet adapter handles[/quote<] what does it handle??? WHAT DOES THIS ETHERNET ADAPTER DO?????!?!?!?!?!?

      • morphine
      • 2 years ago

      Thousands of years from now after humanity was destroyed and re-built again, data archaeologists will spend their lives obsessed with this secret. WHAT DID IT DO?!

        • Redocbew
        • 2 years ago

        And right next to that they find something called “shortbread”? Featuring the paws of a cat? Dude, these people were weird.

        • ronch
        • 2 years ago

        This is precisely why we should scatter torn pieces of printed computer hardware data sheets all over the world for future archeologists to find and piece together! As a bonus, print an extra billion of those programming books as well so they’d know how to program that Athlon XP they dug out of a future buried civilization!

      • the
      • 2 years ago

      I’m pretty sure it did do something, but it crashed and burned before it could tell the world. So maybe that’s what it actually does.

      • gmskking
      • 2 years ago

      It handles whatever you throw at it

      • ronch
      • 2 years ago

      Looks like you could proofread for Jeff. 😉

    • Cannonaire
    • 2 years ago

    You know, there are good reasons to read the TR review first, even for the times it takes a bit longer.

      • DoomGuy64
      • 2 years ago

      Except if you wanted to see gaming mode/gaming+multitasking numbers.

      [url=https://www.pcper.com/reviews/Processors/AMD-Ryzen-Threadripper-1950X-and-1920X-Review/Gaming-Performance-and-Mega-tasking<]PCper[/url<] did run those tests, however, and showed that Game Mode increased gaming performance, and that NUMA mode increased gaming performance under heavy multitasking. UMA mode takes a big hit while playing games under heavy multitasking. TR (the site) didn’t test for this even though it’s primarily what TR (the CPU) will be used for, so it went unnoticed.

      Considering that TR is a CPU meant for multitasking, it’s a big deal that UMA mode can’t handle games under heavy multitasking. Intel has the better CPU for gaming, especially while multitasking. TR is only good at work-related multitasking, while mixing gaming into the multitasking will require some workarounds.

        • Waco
        • 2 years ago

        Intel boards in UMA mode suck similarly, so don’t blame AMD for that one. It’s just an artifact of memory interleaving versus explicit NUMA domains.

        Of course, that wouldn’t fit your anti-Tech Report rhetoric, so of course you won’t acknowledge it. I can’t even tell if you’re an Intel or AMD fanboy these days…you’re equally obnoxious all around when it comes to being honest.

        • synthtel2
        • 2 years ago

        Who’s buying Threadripper for gaming? AMD is promoting it that way, but that doesn’t mean it makes sense.

        NUMA mode seems like it should be an easy win for multitasking interactive loads, assuming the OS is clueful about it. It’s a beast of a CPU either way though, and if someone’s workloads are really that much happier with NUMA, it’s a good bet that they can figure out how to put it in NUMA mode.

          • derFunkenstein
          • 2 years ago

          Someone on this very site told me Threadripper was perfect for Dota streamers who wanted software encoding in their Twitch uploads. As if even 1% of all streamers are actually making a living at streaming.

            • synthtel2
            • 2 years ago

            How many cores can OBS typically fill up, then? The impression I’ve gotten from casual reading of comments here is that an 8C single-die Ryzen at hot clocks would be appropriate for that.

            Threadripper in NUMA mode seems like it could be handy if OBS is burning a ton of memory bandwidth, but 32bpp 4K60 raw is under 2 GB/s, and the compressed side will be much smaller. Even assuming a (probably overkill) 5x fudge factor for the compression algo itself, that’s under 20% of dual-channel DDR4-3200, and encoding should be mostly sequential, right? Someone’s probably done testing to get real-world numbers for this, but blacklisting Admiral properly means blacklisting PCPer, so I can’t look. On theoreticals alone, Threadripper looks like an awful lot of money for not much gain at streaming.
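The back-of-envelope numbers above can be sanity-checked; this assumes 3840×2160 frames at 32 bits per pixel, 60 fps, the commenter's 5x fudge factor, and dual-channel DDR4-3200 at its theoretical peak (2 channels × 8 bytes × 3200 MT/s):

```python
# Raw uncompressed video bandwidth for 4K60 at 32bpp.
width, height, bytes_per_pixel, fps = 3840, 2160, 4, 60
raw_gbps = width * height * bytes_per_pixel * fps / 1e9

# Theoretical peak of dual-channel DDR4-3200.
ddr4_gbps = 2 * 8 * 3200e6 / 1e9  # 51.2 GB/s

print(round(raw_gbps, 2))                  # ~1.99 GB/s raw
print(round(5 * raw_gbps / ddr4_gbps, 3))  # ~0.194, i.e. under 20% of peak
```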

          • Mr Bill
          • 2 years ago

          [quote<]Who's buying Threadripper for gaming? AMD is promoting it that way, but that doesn't mean it makes sense.[/quote<] Same reason Intel released the Intel Core i9-7900X and Core i7-6950X... To have an AMD product at or near the top of those long gaming fps charts. [edit] fixed quotes

            • synthtel2
            • 2 years ago

            According to this review, neither team’s HEDT is even properly accomplishing that. Intel can claim halo, but they already could with mainstream by more of a margin than HEDT is giving them over mainstream.

            • Mr Bill
            • 2 years ago

            AMD needs to release a 64 core version for “gaming”/HEDT just to put in the final nail.

        • Cannonaire
        • 2 years ago

        I do agree that those numbers can be very important – particularly if your workload involves streaming at high quality while playing a demanding game.

        I would still rather see Tech Report’s numbers for what they [b<]have[/b<] tested first, and then fill in the gaps after. TR's explanations, writing, and test analysis are the best I know of (barring the deep dives you get from David Kanter). I also read PCPer.

        • derFunkenstein
        • 2 years ago

        PCPer’s numbers show what a waste of time it is. 4% faster here, 3% slower there…all in the margin of error.

    • crystall
    • 2 years ago

    Here comes the THREADRIPPAH!

    (sorry, couldn’t resist)

      • DPete27
      • 2 years ago

      The more times I read “threadripper” the more sick of it I get. Just sounds so……corny.

        • crystall
        • 2 years ago

        Well, if it were the title of a song from Judas Priest it would be fine. As a processor name on the other hand…

          • K-L-Waster
          • 2 years ago

          Great, now I have a lyric variant of “Painkiller” going through my head. Thanks a… well, actually it’s not so bad….

            • crystall
            • 2 years ago

            Faster than a Xeon
            Terrifying scream
            Locked into its socket
            With a crazy TDP
            Rides the NUMA monster
            Running games and apps
            Closing in with vengeance scoring high
            He is the threadripper
            This is the threadripper

            etc…

          • UberGerbil
          • 2 years ago

          Breakin’ Moore’s Law, Breakin’ Moore’s Law!

    • ptsant
    • 2 years ago

    Great, detailed and balanced review. Thanks!

    I find the single-threaded performance of Threadripper vs Ryzen very reassuring. There is no significant penalty for choosing the HEDT chip. I am also very happy to see the ECC support, which has been spotty with Ryzen. Power/perf is also exceptionally good.

    I can’t see this chip selling a lot at $1000, but it does show the quality of the design and provides a useful “halo” product for AMD. I believe everyone is waiting for the $500-600 part…

      • ultima_trev
      • 2 years ago

      I could see it being quite popular in professional workloads, although consumers are definitely better off with Ryzen 5 1600/1600X.

        • ptsant
        • 2 years ago

        The 1600X is probably the best AMD chip in perf/$, and one of the best chips out there right now.

      • Anonymous Coward
      • 2 years ago

      It’s really the 1900X vs. the 1800X where things get interesting. Does there continue to be no significant penalty when there are no extra cores, and almost no extra CPU cycles, to lean on? Bandwidth vs. latency.

    • shank15217
    • 2 years ago

    We’re ordering 12 Epyc-based AMD dual-socket systems for a hardware refresh. A 2U box with 128 threads, 2 TB of RAM, and enough PCIe lanes for 24 NVMe drives without a PCIe bridge. That’s a seriously balanced architecture.

      • ptsant
      • 2 years ago

      I believe a single server must have more general-purpose FPU units than an entry-level GPU. You could probably emulate a Radeon 7700 using an FPU per GCN core. I have no idea why you would do that, but it could probably be done….

      I am also now wondering how many 8088 PCs at 4.77 MHz you can emulate. Probably enough for a small country.

      • msroadkill612
      • 2 years ago

      Impressive. That’s 96 of the 128 lanes used for NVMe.

      I hate to think what the Intel alternative involved, or their sales guys’ reactions.

      Let’s not forget, folks: he gets 8-channel memory with Epyc, and we saw here the doubling of RAM bandwidth going from dual-channel Ryzen to quad-channel TR.

      Congratulations on what seems a very wise decision.

      I shall re-configure one for you in my pleasant dreams – you can do the other 11.
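The lane budget above checks out; a rough sketch, assuming 128 usable PCIe 3.0 lanes for the dual-socket Epyc box and the usual x4 link per NVMe drive:

```python
# Lane budget for a 2P Epyc system with 24 NVMe drives.
total_lanes = 128      # usable PCIe 3.0 lanes in a dual-socket Epyc box
nvme_drives = 24
lanes_per_drive = 4    # standard x4 NVMe link

nvme_lanes = nvme_drives * lanes_per_drive
print(nvme_lanes)                # 96 lanes consumed by storage
print(total_lanes - nvme_lanes)  # 32 lanes left over for NICs, GPUs, etc.
```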

      • GodsMadClown
      • 2 years ago

      That would make an excellent kubernetes cluster.

    • southrncomfortjm
    • 2 years ago

    Thanks Jeff, and congrats! Have a beer… and a nap.

      • biffzinker
      • 2 years ago

      The review I’ve been waiting for is up. Thanks Jeff, it’s appreciated.

      • derFunkenstein
      • 2 years ago

      Yeah, he’s earned it, that’s for sure. Imagine this review on launch day, had AMD sent him a review kit on time. A TR Editor’s Choice for a $1000 CPU on launch day. AMD could have had that.

        • southrncomfortjm
        • 2 years ago

        I just have a hard time believing that TR is number 251 on AMD’s list of places to give review products to. What 250 sites/mags are better than TR?

          • derFunkenstein
          • 2 years ago

          It boggles my mind hole. I can name a few with more traffic but can’t really find anything objectively better. That’s why I’m still here and that’s why I wait for TR’s reviews.

          • Voldenuit
          • 2 years ago

          Youtube shills. At least, from AMD’s perspective.

            • tsk
            • 2 years ago

            No matter which way you angle it, it makes more sense for AMD to send those 250 packages where they’ll get the most exposure. Linus Tech Tips, for example, has 3.7M views on their four Threadripper videos; I don’t know how many clicks this article will get, but I expect less than 50K.

            • Waco
            • 2 years ago

            I believe that’s a very low estimate…but I’d wager the folks that read TR are far more influential than the [i<]gamerz[/i<] who watch Youtube reviews.

            • Jeff Kampman
            • 2 years ago

            Can we please put this whole bit to rest? I do not harbor any hard feelings for AMD for distributing the hardware like they did or how they did, and like I’ve said before, I didn’t care about getting a Pelican case with my name on it or a commemorative paperweight or other promotional fluff. We got what we needed and produced the results we needed to produce with it.

            Other sites and channels have bigger audiences than we do, for sure, but we are still among the largest 100% independent sites out there and there is value in that. Hopefully this review and our prompt coverage of the Radeon RX Vega prove that.

            • southrncomfortjm
            • 2 years ago

            Okay, fine. We’ll drop it until the next time it happens.

            • Inkling
            • 2 years ago

            I totally agree with Jeff’s comment, but since industry reps and sponsors read this, it should be pointed out that The Tech Report’s CPU and GPU reviews generate traffic in the hundreds of thousands of uniques, often over a million. Just because there is no “view counter” in the corner like YouTube doesn’t mean we don’t have a similar audience and influence as many top “influencer celebrities.”

            • tsk
            • 2 years ago

            Maybe there should be a view counter then 🙂

            • DPete27
            • 2 years ago

            I don’t see why not. The Internet/Society is all just a popularity contest nowadays anyway. That said, Adam’s comment clearly shows they have the ability to collect such data and use it to promote themselves to tech companies.

      • TwoEars
      • 2 years ago

      Was about to say the same thing. Take it easy, Jeff, don’t burn yourself out. Relax for a bit, go swimming or take a bicycle ride in a park or something.

      • Mr Bill
      • 2 years ago

      [url=https://www.youtube.com/watch?v=5WLcNkuftcc<]Hoo![/url<] . . . [edit]Because the TR TR review is finally up![/edit]

    • smilingcrow
    • 2 years ago

    For non-business users buying this, I wonder how many hours per month the CPU will get loaded to more than 50% of its potential?

      • thx1138r
      • 2 years ago

      These are not server CPUs; the load factor is pretty much irrelevant. HEDT processors are about getting compute-intensive tasks out of the way quickly so users can be more productive.

      For my own (software development) case, I only do a complete build of a software project a few times a day, for example. But when I do, I need that process to be quick, and I’d also like to be able to continue using my computer while it’s happening; having lots of (Intel or AMD) cores helps a lot with both.

      • travbrad
      • 2 years ago

      Couldn’t you say the same about almost all CPUs? My 6700K sits idle most of the time but that doesn’t mean I don’t like to have that fast performance when I need it. The same is true for Threadripper. It’s just going to be used for mostly different workloads.

      • ronch
      • 2 years ago

      I think what you mean is: how well will folks who don’t really need all those cores, but get TR just because it’s cool, actually utilize it rather than leave it idle? Home users who transcode videos, play games that max out say 4-5 cores while streaming, and run Folding@Home will load up all those cores pretty often. So it’s not about whether it’s used for business, but rather the user’s usage scenarios.
