AMD’s Ryzen 5 CPUs reviewed, part two

When AMD’s Ryzen 5 CPUs first burst onto the scene, I was happy to find that those chips hung right with Intel’s latest-and-greatest midrange parts for smooth gaming performance. I wasn’t able to complete our productivity testing before that NDA lift, though, and the intervening couple of months or so has been a bit rough for yours truly outside of the TR labs. Those clouds are behind me, though, and I’m happy to be able to share the second half of our Ryzen 5 results now.

In the intervening time, AMD kindly completed my Ryzen 5 collection with the six-core, 12-thread Ryzen 5 1600 and the four-core, eight-thread Ryzen 5 1400. With these chips, I can give a complete picture of the Ryzen 5 family’s performance in 9-to-5 work. The TR labs were blessed with an Intel Core i5-7500 as part of Intel’s Optane Memory test rig, as well, and I’ve dutifully added it to our midrange CPU test suite. So equipped, we can get a great view of how Ryzen 5 CPUs stack up to Intel’s bread-and-butter quad-cores.

| Model | Cores | Threads | Base clock | Boost clock | Max XFR headroom | L3 cache | TDP | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ryzen 5 1600X | 6 | 12 | 3.6 GHz | 4.0 GHz | 100 MHz | 16MB | 95W | $249 |
| Ryzen 5 1600 | 6 | 12 | 3.2 GHz | 3.6 GHz | 50 MHz | 16MB | 65W | $219 |
| Ryzen 5 1500X | 4 | 8 | 3.5 GHz | 3.7 GHz | 200 MHz | 16MB | 65W | $189 |
| Ryzen 5 1400 | 4 | 8 | 3.2 GHz | 3.4 GHz | 50 MHz | 8MB | 65W | $169 |

For a quick refresher, the Ryzen 5 1500X offers four cores and eight threads for $189, while the Ryzen 5 1600X offers six cores and 12 threads for $249. AMD also offers lower-priced variants of each of these CPUs with lower clocks and less XFR headroom. The Ryzen 5 1600 takes a 400-MHz haircut across the board, and AMD slices $30 off the price tag of the 1600X for the trouble. The Ryzen 5 1400 loses 300 MHz of clock speed compared to the 1500X and costs $20 less. It also loses half of the full Ryzen die’s 16MB of L3 cache.

As far as the general state of Ryzen goes, not much has changed since our initial Ryzen 5 review. AMD has promised an update to the AGESA base firmware that will unlock more memory overclocking options on AM4 motherboards, but that update isn’t set to arrive before the end of this month. We got a good look at the many-core Ryzen Threadripper hardware over the course of Computex, too, but shipping Threadripper products aren’t supposed to arrive until later this summer. Given these tranquil climes, it’s a fine time to talk about Ryzen 5 productivity performance. Let’s get to it.

Our testing methods

As always, we did our best to collect clean test numbers. We ran each of our benchmarks at least three times, and we’ve reported the median result. Our test systems were configured like so:

Processor: AMD Ryzen 7 1800X, Ryzen 5 1600X, Ryzen 5 1600, Ryzen 5 1500X, Ryzen 5 1400
Motherboard: Gigabyte Aorus AX370-Gaming 5 / Gigabyte AB350-Gaming 3
Chipset: AMD X370 / AMD B350
Memory size: 16 GB (2 DIMMs)
Memory type: G.Skill Trident Z DDR4-3866 (rated) SDRAM
Memory speed: 3200 MT/s (actual); 2933 MT/s (actual) for the Ryzen 5 1400
Memory timings: 15-15-15-35 1T
System drive: Intel 750 Series 400GB NVMe SSD

 

Processor: Intel Core i5-2500K, Intel Core i5-3570K
Motherboard: Asus P8Z77-V Pro
Chipset: Z77 Express
Memory size: 16 GB (2 DIMMs)
Memory type: Corsair Vengeance Pro Series DDR3 SDRAM
Memory speed: 1866 MT/s
Memory timings: 9-10-9-27 1T
System drive: Corsair Neutron XT 480GB SATA SSD

 

Processor: Core i5-4690K / Intel Core i5-7500 / Intel Core i5-6600K, Core i5-7600K, and Core i7-7700K
Motherboard: Asus Z97-A/USB 3.1 / Asus Prime B250-Plus / Gigabyte Aorus GA-Z270X-Gaming 8
Chipset: Z97 Express / B250 / Z270
Memory size: 16 GB (2 DIMMs)
Memory type: Corsair Vengeance Pro Series DDR3 SDRAM / Kingston ValueRAM DDR4 SDRAM / G.Skill Trident Z DDR4-3866 (rated) SDRAM
Memory speed: 1866 MT/s / 2400 MT/s / 3200 MT/s (actual)
Memory timings: 9-10-9-27 1T / 17 / 15-15-15-35 2T
System drive: Corsair Neutron XT 480GB SATA SSD / Samsung 960 EVO 500GB NVMe SSD
 

Processor: Intel “Core i7-6800K” (Core i7-6950X with four cores disabled)
Motherboard: Gigabyte GA-X99-Designare EX
Chipset: X99
Memory size: 64GB (4 DIMMs)
Memory type: G.Skill Trident Z DDR4 SDRAM
Memory speed: 3200 MT/s
Memory timings: 16-18-18-38 1T
System drive: Samsung 960 EVO 500GB NVMe SSD

They all shared the following common elements:

Storage: 2x Corsair Neutron XT 480GB SSD
Discrete graphics: Gigabyte GeForce GTX 1080 Xtreme Gaming
Graphics driver version: GeForce 378.92
OS: Windows 10 Pro with Creators Update
Power supply: Corsair RM850x

Thanks to Corsair, Kingston, Asus, Gigabyte, Cooler Master, Intel, G.Skill, and AMD for helping us to outfit our test rigs with some of the finest hardware available. As a reward for making it past the dense tables above, you can gaze on some of our test hardware, first for the Ryzen 5 CPUs:

And our Ryzen 7 test platform:

And our Z270 test platform:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at a resolution of 3840×2160 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.

  • For our Ryzen systems, we used the AMD Ryzen Balanced power plan included with the company’s most recent chipset drivers. We left our Intel systems on Windows’ default Balanced power plan.

     

  • The Ryzen 5 1400 wasn’t stable with DDR4-3200 speeds, so we had to dial it back a bit to DDR4-2933. That slight drop in speed explains the entry-level Ryzen 5’s slight performance deficit in some of our synthetic tests compared to its more expensive siblings. 

  • In response to popular demand, we’re re-benching AMD’s Ryzen 7 1800X and Intel’s Core i7-7700K with identical DDR4 speeds: DDR4-3200 at 15-15-15-35 timings.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
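One small footnote on that "median of at least three runs" rule: the reduction really is as simple as it sounds. Here's a trivial Python sketch with placeholder run times, not actual results:

```python
# Trivial sketch of how repeated benchmark runs become the single figure we report.
from statistics import median

def reported_result(run_times_s):
    """We run each benchmark at least three times and report the median."""
    assert len(run_times_s) >= 3
    return median(run_times_s)

print(reported_result([142.1, 139.8, 140.6]))  # -> 140.6
```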

 

Memory subsystem performance

Before we dive into our real-world results, it’s worth revisiting how much data each of these CPUs can move around from main memory and what the latencies associated with those actions are. To get that picture, we rely on AIDA64 Engineer’s built-in memory benchmarks. Our thanks to FinalWire for providing us with this indispensable tool.

Equalize memory speeds between Ryzen CPUs and Intel’s Skylake and Kaby Lake CPUs, and some interesting results fall out.  The Ryzen 5 and Ryzen 7 chips enjoy a lead over the Intel quad-cores in memory reads, a smaller lead in copies, and are more or less equal in writes.

It’s not all rosy, though, as Ryzen CPUs have about the same memory bandwidth regardless of the number of cores and threads in the socket. The eight-core, 16-thread Ryzen 7 1800X has to split roughly the same amount of bandwidth among its cores as the four-core, eight-thread Ryzen 5 1500X does (save for writes). Intel, on the other hand, gives its six-core i7-6800K a massive boost in bandwidth over its mainstream desktop CPUs. We’ll have to see how AMD’s Ryzen Threadripper platform and its quad-channel memory architecture affect these standings.

Raw memory bandwidth is important, but so is the latency of those accesses. All else being equal, the Ryzen chips lag every Intel CPU in this test by a wide margin. Those higher latencies, combined with the bandwidth pressure exerted by many hungry cores, could have an adverse effect on Ryzen CPUs’ performance in memory-intensive operations.

Some quick synthetic math tests

AIDA64 offers a useful set of built-in directed benchmarks for assessing the performance of the various subsystems of a CPU. The PhotoWorxx benchmark uses AVX2 on compatible CPUs, while the FPU Julia and Mandel tests use AVX2 with FMA.

The PhotoWorxx test shows how Intel’s superior AVX throughput on Broadwell-E, Skylake, and Kaby Lake chips can effectively allow the company’s quad-core CPUs to match or outpace AMD’s six- and eight-core parts. The FPU Julia and Mandel tests further illustrate this deficit: it takes eight AMD Zen cores to match the Core i7-7700K and the Core i7-6800K. The lesser Ryzens don’t have a chance.

 

Javascript performance

The usefulness of Javascript benchmarks for comparing browser performance may be on the wane, but these collections of tests are still a fine way of demonstrating the real-world single-threaded performance differences among CPUs.

 

Intel’s Skylake and Kaby Lake CPUs bunch up at the top of these charts thanks to their killer combo of high single-threaded performance and high clocks. What’s most interesting is how closely the Ryzen 7 1800X, the Ryzen 5 1600X, and Ryzen 5 1500X cluster, especially in the Jetstream benchmark. Whether you’re paying $190, $250, or $460 for a CPU in the Ryzen family, you can expect the same general snappiness in lightly-threaded tasks.

As for AMD’s non-X Ryzen 5s, the 1600 is still a fairly close match for its more expensive sibling. The Ryzen 5 1400 trails far behind the pack, though, only besting Intel’s Sandy Bridge and Ivy Bridge Core i5s in Octane. Meanwhile, the Core i7-6800K can’t put up much of a fight against its cheaper six-core Ryzen competition here despite its Turbo Boost Max 3.0 support. Score one for the red team.

Compiling code in GCC

Our resident code monkey, Bruno Ferreira, helped us put together this code-compiling test. Qtbench records the time needed to compile the Qt SDK using the GCC compilers. The number of jobs dispatched by the Qtbench script is configurable, and we set the number of threads to match the hardware thread count for each CPU.
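If you just want the flavor of what Qtbench automates, the heart of the measurement is a timed parallel build with one compiler job per hardware thread. The Python sketch below illustrates the idea only; the source directory and make invocation are placeholders, not the actual Qtbench script.

```python
# Sketch of the idea behind our compile test: time a parallel GCC build with
# one make job per hardware thread. Paths are placeholders, not the real script.
import os
import subprocess
import time

jobs = os.cpu_count()                  # match the job count to the hardware thread count
start = time.perf_counter()
subprocess.run(["make", f"-j{jobs}"],  # kick off the parallel build
               cwd="qt-src",           # hypothetical source tree
               check=True)
print(f"{jobs} jobs finished in {time.perf_counter() - start:.1f} s")
```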

Chalk up another good showing for the Ryzen bunch in this test. The 1600X edges past the i7-7700K, and the 1600 trails the i7-6800K by a hair’s breadth. The Ryzen 5 1500X lands between the unlocked Core i5s, leaving its $190 Core i5-7500 competitor a ways back. Even with eight threads, however, the Ryzen 5 1400 can only just match the Core i5-7500.

7-Zip file compression

If you need to zip up files often, the Ryzen CPUs are hard to beat for the buck. The i7-6800K opens a hefty margin on the six-core competition in 7-Zip’s compression test, possibly thanks to its copious memory bandwidth. Both the Ryzen 5 1600 and 1600X are hanging right with the Core i7-7700K here, though, and the Ryzen 5 1500X can finally stretch all eight of its threads to nose past the unlocked Core i5s.

I decompress ZIP archives far more often than I compress them, and here, the Ryzen chips dominate. The 1600X opens up a wide margin on the Core i7-6800K, and even the Ryzen 5 1400 can blast past the quad-core Skylake and Kaby Lake chips.

VeraCrypt disk encryption

Full-disk encryption is another task amenable to multithreaded performance gains, and the Ryzen 5 parts generally leave the Intel competition in the dust in both the hardware-accelerated AES and the pure-software Twofish portions of our test. Any Ryzen CPU is a fine choice if you need to hide your files from prying eyes quickly.

 

Cinebench

The Cinebench benchmark is powered by Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. The test runs with a single thread and then with as many threads as possible.

Much like our Javascript benchmarks, the Ryzen CPUs get sandwiched by older Intel parts at the bottom and Kaby Lake at the top in Cinebench’s single-threaded test. Still, they’re hanging with CPUs we would hardly consider inferior. You could keep worse company than the Core i5-4690K, the Core i5-7500, and the Core i7-6800K.

Fire off Cinebench on every thread and there are no surprises in the results. The Ryzen 5 1600X and Ryzen 5 1600 bookend the Core i7-6800K, and the Ryzen 5 1500X outpaces every Core i5 in our test suite. Mark another win for Ryzen here.

Blender
Blender is a widely-used, open-source 3D modeling and rendering application. The app can take advantage of AVX2 instructions on compatible CPUs. We chose the “bmw27” test file from Blender’s selection of benchmark scenes to put our CPUs through their paces.

It’s not quite a win for Ryzen 5 CPUs in this test, but it’s close. The 1600 and 1600X stalk the i7-6800K and narrowly beat out the more expensive Core i7-7700K. The Ryzen 5 1500X’s eight threads narrowly beat the Core i5-7600K and its higher-throughput AVX hardware, too. The Ryzen 5 1400 even ekes out a victory over the Core i5-7500. The most notable results in this field come from the i5-2500K and i5-3570K, whose lack of AVX2 support severely hampers their standings.

Handbrake video transcoding
Handbrake is a popular video-transcoding app that recently hit version 1.0. To see how it performs on these chips, we converted a roughly two-minute 4K source file from an iPhone 6S into the legacy “iPhone and iPod touch” preset using the x264 encoder’s otherwise-default settings.
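We drove Handbrake through its GUI, but for reference, a roughly equivalent command-line invocation is sketched below. The file names are placeholders and the exact preset string varies by Handbrake version, so treat this as an illustration rather than our exact recipe.

```python
# Rough command-line equivalent of our Handbrake test run. File names and the
# preset string are placeholders; the GUI was used for the actual testing.
import subprocess

subprocess.run([
    "HandBrakeCLI",
    "-i", "iphone-6s-4k-clip.mov",       # ~2-minute 4K source file
    "-o", "transcode-test.mp4",
    "-e", "x264",                        # x264 encoder at otherwise-default settings
    "-Z", "Legacy/iPhone & iPod touch",  # legacy preset; the name may differ by version
], check=True)
```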

Handbrake multithreads well, and the Ryzen 5s take advantage by outpacing the similarly-priced Intel competition. The Ryzen 5 1600 matches the Core i7-6800K, and the 1600X is ever so slightly faster. Moving on.

LuxMark OpenCL performance

Because LuxMark uses OpenCL, we can use it to test both GPU and CPU performance, and to see how these different types of processors work together. We used the Intel OpenCL runtime for all of the CPUs at hand, since it delivers the best performance under LuxMark for x86 CPUs of all types in our experience.

We used the “Hotel lobby” scene for our testing, but we otherwise left LuxMark at its default settings.

In the CPU-only phase of our testing, performance scales as expected with cores and threads, save for the Core i7-6800K’s surprise win. All of the Ryzen CPUs handily beat out their comparable Intel competition here, though.

The GPU-only phase of the test tells us that the GTX 1080 in our test rig works equally well across all of our platforms. Moving on.

Putting the CPU and graphics card together shows some differences among CPUs, but the results are largely close together except at the extremes of the chart. Every CPU here gets a major helping hand by being paired with the GTX 1080.

 

Image analysis with picCOLOR

It’s been a while since we tested CPUs with picCOLOR, but we now have the latest version of this image-analysis tool in our hands courtesy of Dr. Reinert H.G. Mueller of the FIBUS research institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. In its current form, picCOLOR supports AVX2 instructions, multi-core CPUs, and simultaneous multithreading, so it’s an ideal match for the CPUs on our bench today. Check out FIBUS’ page for more information about the institute’s work and picCOLOR.

picCOLOR’s real-world results seem to scale nearly perfectly with CPU resources, so as usual, the most cores and threads at the highest clocks win. In this case, that means the Ryzen 5 1600 and 1600X beat out the Core i7-6800K, and they’re only rivaled by the i7-7700K. The Ryzen 7 1800X blows away the rest of the field with its sixteen threads, so much so that we had to double-check its score for accuracy. The result checks out, though.

CFD performance with Euler3D

Euler3D tackles the difficult problem of simulating fluid dynamics. It tends to be very memory-bandwidth intensive. You can read more about it right here. We configured Euler3D to use every thread available from each of our CPUs.

Euler3D hungers for memory bandwidth, and only the Core i7-6800K can truly sate it. The Broadwell-E chip roughly doubles the performance of its six-core Ryzen competition. While the Ryzen 5 1600s can still put up a fair fight with the Core i5-7600K and Core i7-7700K, they simply can’t match Broadwell-E’s quad-channel memory bandwidth. Save us, Threadripper, you’re AMD’s only hope.

Digital audio workstation performance

After our trial run with DAWBench in our last CPU review, we (gasp) read the instructions for the benchmark and discovered that we had been doing it wrong (though not in a fashion that would have unduly favored one CPU over another). This time around, we’re doing it right. We’re able to run both the DAWBench DSP benchmark, which tests the number of instances of a VST plugin a CPU can handle before being overloaded, and the DAWBench VI test, which tests virtual instrument and sampler performance. We chose the most demanding versions of both the DAWBench DSP and DAWBench VI tests from the project file to put as much hurt as possible on our test systems.

We used the latest version of the Reaper DAW for Windows as the platform for our tests. To simulate a demanding workload, we tested each CPU at a 24-bit depth and a 96-kHz sampling rate, and at two ASIO buffer depths: a punishing 64 and a slightly-less-punishing 128. We then added VSTs or notes of polyphony to each session until we started hearing popping or other audio artifacts. We used Focusrite’s Scarlett 2i2 audio interface and the latest version of the company’s own ASIO driver for monitoring purposes.
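For a sense of why those buffer depths matter, remember that the CPU has to hand each completed buffer back to the audio interface before the previous one finishes playing. At a 96-kHz sampling rate, that deadline is well under a millisecond for a 64-sample buffer, as the quick back-of-the-envelope calculation below shows.

```python
# Back-of-the-envelope: how long the CPU has to fill each ASIO buffer before
# the interface runs dry and we start hearing pops and crackles.
SAMPLE_RATE_HZ = 96_000

for buffer_samples in (64, 128):
    deadline_ms = buffer_samples / SAMPLE_RATE_HZ * 1000
    print(f"{buffer_samples:>3}-sample buffer @ 96 kHz -> {deadline_ms:.2f} ms per buffer")
# 64 samples -> 0.67 ms, 128 samples -> 1.33 ms
```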

A very special thanks is in order here for Native Instruments, who kindly provided us with the Kontakt licenses necessary to run the DAWBench VI project file. We greatly appreciate NI’s support—this benchmark would not have been possible without the help of the fine folks there.

In the steady-state DSP test, the six-core Ryzen 5s slightly trail the Core i7-6800K. Excepting the Core i5-4690K’s unusually good performance (which may be a reflection of platform differences rather than pure CPU performance among Intel parts), the Ryzen 5 1500X comes out on top versus Intel’s four-core, four-thread parts. For serious power behind this type of work, however, more cores and threads really seem to help, and the Ryzen 5 1600 and 1600X both offer quite a bit of DSP prowess for not a ton of cash.

The DAWBench VI test starts each of its loops with a many-voiced instrumental stab, so performance in this test depends on how a CPU can handle a steady-state workload interrupted by a sudden, much more resource-intensive and latency-sensitive burst of work.

At the punishing buffer size of 64, the older and lower-clocked chips in our lineup can’t handle the VI test at all. The Ryzen 5 1500X redeems itself nicely over the i5-7500, and the Core i7-7700K only barely edges by the Ryzen 5 1600s. The Core i7-6800K turns in a freakishly good performance here, though, and we can only presume that’s because of its scads of memory bandwidth compared to every other chip in our lineup.

Relax the buffer size to 128, and every chip in our lineup can at least run the VI test. The 1500X still comes out atop the i5-7500, although the matchup is much closer at these more forgiving settings. The Ryzen 5 1600s just trail the i7-7700K. Meanwhile, the i7-6800K maxes out the number of voices available in the DAWBench VI test file. If you need to do this kind of work, the X99 platform simply can’t be beat.

The Ryzen 5 chips make digital audio workstation performance more accessible than similarly-priced Intel parts do, and that’s a major win for AMD. However, the Core i7-6800K’s total dominance of these tests suggests the Ryzen parts could do with more memory bandwidth to play with. Bring on the Threadripper.

 

A quick look at power consumption and efficiency

A piece of the Ryzen puzzle that we’ve left unaddressed so far is power efficiency. The fact of the matter is that my testing facilities don’t have the sophisticated power-measurement equipment that we’ve enjoyed access to in the past. Still, we’ve found a way to estimate our classic task energy measurement by using a trusty Watts Up power meter and the Blender “bmw27” standard benchmark. Our observations indicate that Blender is a very steady-state workload, meaning that power consumption varies little over time. Using this knowledge and the fact that one watt is one joule per second of energy expended, we can estimate the total amount of energy expended over the course of our benchmark run.

First, let’s look at idle power consumption for each system in our test lineup. These measurements will vary with the host motherboard and the devices attached to a system, so we’d caution against putting too much stock in them. Still, the Ryzen systems consume just a few more watts than comparable Intel chips do at idle. The Gigabyte X99-Designare EX motherboard hosting our Core i7-6800K is notorious for high idle power consumption thanks to its PLX PCI Express switch, so it should be considered an outlier.

Next, let’s look at load power results. Unsurprisingly, the Ryzen 7 1800X consumes the most power of the bunch at load, and the Ryzen 5 1600s and the Core i7-6800K aren’t far behind. The Core i5-7500, on the other hand, sips power even compared to the 65W Ryzen 5 1400 and 1500X. Just goes to show that TDP isn’t a useful way of comparing the efficiency of CPUs from different manufacturers.

Load power consumption alone doesn’t tell us anything about efficiency, though. Some chips finish work quickly despite their high load power draws, while others take much more time to complete a job despite deceptively low peak figures at load. Since a watt is a joule per second, multiplying each system’s load power draw by the number of seconds it takes to complete our Blender benchmark gives us an estimate of total task energy in kilojoules—emphasis on estimation.
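In code form, the estimate is as straightforward as it sounds. The figures below are illustrative placeholders rather than measured results.

```python
# A watt is a joule per second, so for a steady-state load like Blender's bmw27
# scene, average load power times render time approximates total task energy.
def task_energy_kj(avg_load_watts, render_time_s):
    """Estimated task energy in kilojoules."""
    return avg_load_watts * render_time_s / 1000

print(f"{task_energy_kj(avg_load_watts=150, render_time_s=600):.0f} kJ")  # -> 90 kJ
```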

Ideally, the standings of each chip would correlate 1:1 with the time taken to complete our selected workload. For the most part, they do. The Core i5-7500’s power-sipping ways mean it floats toward the top of the chart for efficiency, though, despite its rather lengthy time to completion. The Ryzen 5 1600 also beats out the Ryzen 5 1600X and Core i7-6800K despite finishing behind them in our time-to-completion standings. The Ryzen 5 1600X is “only” as efficient as the Ryzen 5 1400, suggesting its high clocks and relatively high 95W TDP put it at an efficiency disadvantage compared to the other six-core chips in this test despite its third-place time-to-completion performance.

Overall, though, these estimated numbers are far more heartening for AMD than the company’s past showings in our energy-efficiency tests. Every Ryzen CPU in our tests seems about as efficient as similarly-priced Intel CPUs, and that’s a huge leap forward for AMD compared to the bad old days of the FX-8350 and friends. Builders won’t have to trade power efficiency for performance when they choose a Ryzen CPU to power their systems.

 

Conclusions

It’s time once more to sum up our results using our famous scatter plots. To spit out this final index, we take the geometric mean of each chip’s results in our real-world productivity tests, then plot that number against retail pricing gleaned from Newegg. Where Newegg prices aren’t available, we use a chip’s original suggested price.
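For those curious about the math behind the index, it’s a plain geometric mean of per-test scores, which keeps any single benchmark from dominating the final number. A minimal sketch with placeholder scores:

```python
# The final performance index is the geometric mean of each chip's per-test
# results, later plotted against price. Scores below are placeholders.
from math import prod

def geomean(scores):
    return prod(scores) ** (1 / len(scores))

print(round(geomean([1.00, 0.92, 1.13, 1.05]), 3))
```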

When we first looked at the performance of AMD’s Ryzen 5 processors, I had a feeling that big changes were on the horizon for the midrange CPU market. Our productivity tests prove it. The $250 Ryzen 5 1600X is actually a hair faster than Intel’s $340 Core i7-7700K in most of our real-world testing, and it walks all over the $240 Core i5-7600K in all but a few lightly-threaded tests and synthetic benchmarks. The i5-7600K boasts much higher single-threaded performance than the Ryzen part, but that advantage only gives the Intel chip an edge in some games and for general desktop usage. Even so, one can hardly call the 1600X a pokey processor in those tasks thanks to its high clocks and XFR boost. Given its all-around competence, I think the Ryzen 5 1600X is the chip to get for $250 right now.

AMD Ryzen 5 1600

AMD Ryzen 5 1600X

June 2017

Wait up, though. In a happy development for builders, it’s not necessary to spend $250 plus the cost of a cooler to get a powerful midrange CPU any longer. The Ryzen 5 1600 barely trails both the i7-7700K and the 1600X in our productivity tests, and it includes a hefty Wraith Spire cooler in the box. For just $220 out the door, you can grab an enviable amount of multithreaded performance for just a smidge more than what you might have paid for a locked Intel quad-core in the past. Can you say value?

Neither Ryzen six-core part can catch the Core i7-6800K in our final standings, but the hexa-core Broadwell-E’s advantages only come into play where memory bandwidth or floating-point prowess is a concern. When it can tap those advantages over AMD’s chips, the i7-6800K can totally lap the field. Even so, our final reckoning puts the i7-6800K just 11% ahead of the Ryzen 5 1600X, and the Intel chip is a whopping 52% to 76% more expensive than the AMD part depending on the discount winds. Those prices are quite hard to swallow given the virtues of the Ryzen 5 1600-series. It’s no wonder Intel is pricing its Skylake-X chips more in line with historical trends.

For quad-core CPUs, the $190 Ryzen 5 1500X is the clear choice over any four-core Intel part in its price range. The evergreen Core i5-7500 can hang with the 1500X in some of our tests, but the Ryzen chip’s eight threads give it enough of an advantage in a wide enough range of work that we’d heartily recommend the AMD chip instead. The i5-7500’s integrated graphics processor will still be of interest to some PC builders, but enthusiasts and gamers will likely be adding a discrete graphics card to their parts lists anyway. That said, $30 more buys you a lot more CPU in the Ryzen 5 1600, so we’d save up and get the six-core chip if at all possible.

The Ryzen 5 1400 isn’t worth the $20 one would save over the 1500X. Even with four cores and eight threads, the 1400 plods behind the i5-7500 in our tests, and it’s well outpaced by its faster quad-core sibling. That $20 savings also requires giving up half of the L3 cache on board the 1500X. Given those caveats, we’d strongly recommend spending the extra money on the 1500X.

All told, the shortcomings of AMD’s Ryzen CPUs mostly fade away when one isn’t paying $300 or more for a chip. The Ryzen 5 1600X delivers Ryzen 7 1800X-class gaming performance for half the money, and its productivity performance is unprecedented for its price. The Ryzen 5 1600 and Ryzen 5 1500X also bring new levels of multithreaded performance to their price points. PC builders have real choice again in sub-$300 CPUs with the Ryzen 5 family, and for those who lean hard on their systems, our tests show their choice should be AMD.

Comments closed
    • albundy
    • 2 years ago

    what i love:
    -real world performance – rendering, compression, encryption, and most important to me, encoding.
    -65w R7 1700 overclocks like a beast, and shows decent performance gains.

    what i don’t like:
    the entire motherboard line up sucks. first the releases were botched. there was hardly any stock. the disappointments didn’t stop there. it was hard finding a board with usb 3.1 gen2 connectors. i expected 2nd revisions at computex, but nothing was mentioned.
    -ram was a hit or miss…mostly miss if you went over 2133mhz. even now, 6 AGESA bios updates later, and still no full ram compatibility. We will be entering month 5 soon. this is despicable and embarrassing to say the least.
    -no more e-sata. what genius decided to cut corners again? yet we still have pci slots, usb 2.0 ports, com ports and PS2 connectors on the new boards.

      • raddude9
      • 2 years ago
    • Ushio01
    • 2 years ago

    AMD has caught up to Intel, so well done AMD, but it’s too late.

    Good-enough CPU performance was reached years ago; it’s not the ’90s or early 2000s anymore, when upgrading regularly made a noticeable difference.

    Has AMD released any Ryzen chips that are winning notebook orders yet? What about in the server space? those are the markets AMD needs.

      • pogsnet1
      • 2 years ago

      It’s not too late, many people still buy, new generation teens or adults will buy what is worth for the money. Like me I will buy ryzen 5

    • Arbie
    • 2 years ago

    To those actually about to buy: consider the bigger picture. It’s amazing that AMD could come from so far behind in this high-stakes game, and literally force Intel’s hand on several fronts. They and their investors may not be able to do that twice. Look how Intel has behaved without competition – do you want that permanently? Who would you rather reward with your money? Now is the time to go AMD, even if your case isn’t one of the 80% where they offer the best value now anyway.

    I had been buying Intel but, seeing this bigger picture, I splurged on an 1800X. When big Vega comes out I’ll go for that too. Thank you, AMD. (And no, I don’t own their stock).

    • gamoniac
    • 2 years ago

    A bit late but great review. Thank you.

    Nitpicking here, but the summary paragraph sounds a bit confusing to me. It almost sounds negative:
    [quote<]All told, the shortcomings of AMD's Ryzen CPUs mostly fade away when one isn't paying $300 or more for a chip.[/quote<] It would be more clear to say "All told, AMD's Ryzen CPUs provide great value below $300 budget"? Unless I have misunderstood the intent.

    • AMDisDEC
    • 2 years ago

    Great review!
    I am looking forward to TR Threadripper review, when NDA is lifted.

    You can thank all this Zen CPU performance and solid marketing to the greatest CEO AMD has had since Jerry Sanders, Dr. Lisa Su.

    Dr. Lisa Su, not to be confused with other female CEO’s such as, Cara Carleton Fiorina.
    Lisa Su is like the female version of Jensen Huang.
    A MIT graduate engineer with her background in Semiconductors. She knows what’s happening, and should be happening in the chip, and she knows what has to happen to pull AMD out of the ditch former CEOs ran the company head first into.

    The lady is an Engineering and Business Genius.

    She runs a tighter ship than Hector Ruiz, who allowed AMD Engineering, Marketing and Acquisitions to ruin all of the momentum built on the success of The Opteron processor.
    Dr. Su won’t allow that level of incompetence under her Zen-like carefully planned watch.

      • chuckula
      • 2 years ago

      When I’m with you Lisa Su.
      I go out of my head.
      I just can’t get enough
      I just can’t get enough
      [url=https://youtu.be/foDox5lshSs?t=24s<]I just can't seem to get enough of RyZen[/url<]

    • juampa_valve_rde
    • 2 years ago

    Thanks for the complete review Jeff. Good stuff.

    I’m seeing a benign view over ryzen beyond their good qualities, that AMD found a strikingly good and well rounded design, then executed properly, but it’s not all sparks and magic, they chose to not include an IGP (that counts for half a die on intel quad cores) and put instead moaar cores and cache, but solid performing cores (not gimpy halves like bulldozer) and they nailed it, because people looking for badass game and work performance mostly don’t give a damn about IGP. Besides that, the layout was well thought, being able to put a bunch of these dies together and have a badass server or workstation chip. Kudos to AMD for coming back to the ring with a proper fighter, now i wanna buy one.

    • helix
    • 2 years ago

    As this is such a new platform/architecture I think this review really needed its part II. This is not just “slightly higher IPC over last year + some odd new special purpose instruction”.

    Thanks for putting in the time to get this done!

    • synthtel2
    • 2 years ago

    A great review, thanks!

    I am curious about a technical point – you equalized most RAM specs, but the Sky/Kaby Lake parts are at 2T where all the other rigs are at 1T. Did they have trouble reaching 1T, or is that a typo, or…. ?

      • Shobai
      • 2 years ago

      I typed this question up twice myself, but bailed before posting on both occasions. Jeff did a great job at matching latencies across the configurations: they’re all in the 9-10ns range apart from the i5-7500 at a little over 14ns [which I understand is using Intel supplied RAM].

      I’m keen to hear from Jeff, although I can’t imagine it would make a huge difference one way or the other. From what little I could find, back in the good old days of DDR overclocking, 1T was good for between 2% and 3% better performance over 2T. I don’t know how it affects more modern RAM.

    • HERETIC
    • 2 years ago

    Nailed it Jeff.
    From as far back as Sandy the i5K has held the sweet spot.
    6 core Ryzen now owns that spot…………………………………

    Really don’t know what market the 4 core is aimed at,
    can say the same for all i3’s as well-The HT Pentium owns
    the budget end,and when you look at total system cost,the
    jump to 6 core Ryzen isn’t huge-Making everything in-between
    almost pointless……………………………..

    I do think AMD are missing out by not trying to implement some
    form of chipset integrated graphics on some MB.
    How that could be done without a NB no idea………………….

      • helix
      • 2 years ago

      They will likely launch an APU with integrated graphics after the summer.

        • JustAnEngineer
        • 2 years ago

        I’d expect that the first APUs will be aimed at the lucrative laptop market. The low-margin desktop variants may be a lower priority.

      • semitope
      • 2 years ago

      they have options on that. they really could include a GPU on the motherboard at a premium, or bundle a really low end GPU with motherboards/CPUs. they can design a tiny GPU for this purpose only. if it makes sense to them anyway. just needs to be able to drive a monitor up to 4k or more.

    • odizzido
    • 2 years ago

    1600x is the CPU I would get if I were buying one today.

    • sdch
    • 2 years ago

    [quote<]Intel "Core i7-6800K" (Core i7-6950X with four cores disabled)[/quote<] Did you adjust the base/turbo speeds as well as the cache? I was under the impression that there's more to the two CPUs besides core count. Forgive me if I've overlooked something. Thanks!

      • Jeff Kampman
      • 2 years ago

      Of course.

        • chuckula
        • 2 years ago

        A horse is a horse?

          • Mr.Ed
          • 2 years ago

          You know I’m famous, right?

            • chuckula
            • 2 years ago

            YOU WIN.

            • derFunkenstein
            • 2 years ago

            Best joke account ever.

    • Demetri
    • 2 years ago

    Amazon has the 1600X for $230 today, and the 1600 for $210.

    [url<]https://www.amazon.com/AMD-Ryzen-1600X-Processor-YD160XBCAEWOF/dp/B06XKWT7GD/ref=sr_1_18?ie=UTF8&qid=1496684014&sr=8-18[/url<]

    Also you can get a new 1700X on Ebay for $297 with promo code "PJUNESAVINGS10":

    [url<]http://www.ebay.com/itm/New-AMD-RYZEN-7-1700X-8-Core-3-4-GHz-AM4-Skt-95W-YD170XBCAEWOF-Desktop-CPU-/262900101141[/url<]

    Same seller also has the 1400 for $140, and 1500X for $170 after promo code.

    • sophisticles
    • 2 years ago

    If I may offer my own conclusions:

    1) You can throw away the Blender benchmark results because they are meaningless; anyone that uses Blender will not bother rendering via cpu, any cpu, they will just use gpu acceleration and smoke those rendering benchmarks.

    2) You can throw away the Handbrake encoding benchmarks as well. Few still use software based encoding, most have switched to either NVIDIA’s excellent NVENC, which I am currently using to encode 1080p content at nearly 400 fps and after hundreds of encodes I find it approaches x264+medium quality OR they are using Intel’s Quick Sync, which according to MSU’s 2016 codec shootout is capable of beating, quality-wise, x264+placebo AND x265+very slow, Kaby Lake adds hardware VP9 encoding, and there is no software based VP9 encoder that comes close to encoding in real time.

    3) The GCC compilation benchmarks are unreliable, it’s all over the open source community, Ryzen cpu’s have a bug in them that causes systems to crash during long compilation runs:

    [url<]http://phoronix.com/scan.php?page=news_item&px=Ryzen-Compiler-Issues[/url<]

    Then there's this reality, let's assume the above 3 points did not exist, one would still be foolish to build a system based on either current AMD or current Intel Processors, because in a couple of months Coffee Lake will be out and a potential builder owes it to himself to see what those new Intel processors will offer.

    If TR wants to do an interesting review, how about doing an encoding shootout, comparing the GTX1050, the cheapest AMD RX they can get and a Kaby Lake Pentium, via FFMPEG on Linux (Ubuntu is fine). If need be I would be glad to offer pointers on accessing the hardware encoders via nvenc and vaapi.

      • chuckula
      • 2 years ago

      [quote<]1) You can throw away the Blender benchmark results because they are meaningless; anyone that uses Blender will not bother rendering via cpu, any cpu, they will just use gpu acceleration and smoke those rendering benchmarks.[/quote<]

      That's actually not true. For lower-quality "live" rendering while you are working you are right that the GPU is quite important. However, for higher-quality rendering the CPU is still not only heavily used but can even be faster than the GPU (believe it or not!)

        • RAGEPRO
        • 2 years ago

        He’s also wrong about the GPU-based encoding. Nobody doing serious video encoding uses hardware accelerators for final encodes. Even for live-streaming, the OBS crew recommends CPU-encoding because it simply has the best quality-per-bitrate at low bitrates.

          • tipoo
          • 2 years ago

          By some definitions of serious. The kids becoming youtube millionaires definitely use hardware accelerated encode (get stuff out as fast as possible), while traditional “pro” media for high end TV/movies use CPU as you said because the quality. Over internet compressed video though it doesn’t matter.

            • RAGEPRO
            • 2 years ago

            [quote<]The kids becoming youtube millionaires definitely use hardware accelerated encode (get stuff out as fast as possible)[/quote<]Believe it or not, most of them don't. The vogue is to have a second PC explicitly for transcoding, usually with an LGA 2011[v3] CPU.

          • Voldenuit
          • 2 years ago

          I have a friend who streams, and OBS (CPU) simply tanks his framerate; these days he switches between OBS or Shadowplay based on how seriously he’s gaming at any given time.

            • derFunkenstein
            • 2 years ago

            That’s why Elgato’s capture cards like the [url=https://www.elgato.com/en/gaming/game-capture-hd60<]HD60[/url<] and second PCs to encode/stream the video are all the rage among serious (i.e. those who make it a living) streamers.

        • cegras
        • 2 years ago

        > That’s actually not true.

        Neither is what you said. OBS recommends CPU rendering and avoiding quick sync or the GPU. The quality dies as soon as the screen changes a lot from frame to frame – i.e. any video game.

          • chuckula
          • 2 years ago

          Instead of posting blind-hatred personal attacks against me just because I don’t bow deeply enough to your preferred hardware vendor, why don’t you try reading what I actually wrote.

          Like: I never said anything whatsoever about game streaming in my post. At all.

          But that sure as hell didn’t stop you from literally inventing a strawman from thin air based on literally nothing so that you could attack me personally with zero facts, now did it?

          Let’s play your game the way you like it to be played hmm??

          You’re flat wrong in alleging that Maxwell GPUs implement DX12 asynchronous computing. What’s wrong with you Nvidia fanboy?

            • SlappedSilly
            • 2 years ago

            [quote<]But that sure as hell didn't stop you from literally inventing a strawman from thin air based on literally nothing so that you could attack me personally with zero facts, now did it?[/quote<]

            Funny you should say that. Let's look at cegras post.

            [quote="cegras"<]Neither is what you said.[/quote<]

            Is that a personal attack? Nope. Even if you construe it as an attack at all, it's an attack on what you said, not you personally.

            [quote="cegras"<]OBS recommends CPU rendering and avoiding quick sync or the GPU.[/quote<]

            Personal attack? Nope.

            [quote="cegras"<]The quality dies as soon as the screen changes a lot from frame to frame - i.e. any video game.[/quote<]

            Personal attack? Nope. Sorry, I failed to find a personal attack. Now let's look at what you say.

            [quote="chuckula"<]Instead of posting blind-hatred personal attacks against me just because I don't bow deeply enough to your preferred hardware vendor ...[/quote<]

            Straw man? Yup.

            [quote="chuckula"<]What's wrong with you Nvidia fanboy?[/quote<]

            Personal attack? Yup.

            • chuckula
            • 2 years ago

            Blah blah blah.

            When 90% of cegras’s posts are personal attacks on me (he has a long history) and the other 10% are basically irrelevant, I really don’t care that somebody gets upset when I respond after an unprovoked attack that is literally based on cegras’s wishes of what I said so that he could pretend to be some genius in his Aaron-Sorkin fantasy world.

            As Captain Sheridan said, I don’t start fights but I finish them.

      • raddude9
      • 2 years ago
        • thecoldanddarkone
        • 2 years ago

        Source for your coffee lake rumor.

          Found, ignore.

          • raddude9
          • 2 years ago
      • semitope
      • 2 years ago

      [quote<]3) The GCC compilation benchmarks are unreliable, it's all over the open source community, Ryzen cpu's have a bug in them that causes systems to crash during long compilation runs:[/quote<] iirc this was solved

        • MathMan
        • 2 years ago

        Not at all.

        [url<]https://community.amd.com/message/2796982[/url<] AMD is suggesting on their forums to disable certain features, but they don't seem to conclusively fix things. And BSD has a patch that tries to work around the issue. Something about an IRET instruction going bad.

      • f0d
      • 2 years ago

      err nope gpu/quicksync encoding in handbrake still blows compared to pure cpu encoding
      you either get a bigger file or a lower quality encode on gpu compared to cpu
      i have tested with gtx970 as well as a 7600k

      edit: were these tests based solely on psnr/ssim? i hope not as they are rough indicators of visual quality at best – also how much faster was quicksync than an 8 core cpu?

      • just brew it!
      • 2 years ago

      How does the fact that a few people seem to be having issues with gcc [i<]crashing[/i<] invalidate the benchmark?

        • sophisticles
        • 2 years ago

        I am going to address everyone’s comments in this reply, starting with yours.

        The benchmark is invalidated not from a performance standpoint but from a practical use standpoint. The benchmark is testing performance and the relative results are valid BUT if a test subject has a bug that doesn’t allow it to be used for any real compilation work then the benchmark becomes meaningless. In other words, yes if all you’re going to do is run a short compilation of a couple of minutes then sure the results are valid. But if you are building a distro from source, like a LFS or Gentoo or you’re a developer getting ready to release a new version of your distro and half way through building the final iso your pc crashes due to a cpu bug, then the benchmarks don’t amount to a hill of beans, now do they?

        Regarding Blender, I have no idea where anyone got the idea that gpu rendering was only for previewing but that for final rendering the cpu is somehow faster and higher quality. This is just laughable and shows that this person has never used Blender in any serious capacity.

        Regarding the encoding, I have tested NVENC extensively and posted comparisons over at the videohelp forums (and getting reading to post a fresh set soon) and it matches x264+medium except it encodes at about 12x real time for 1080p; Sky Lake’s QS was tested by the respected Moscow State University in their last codec comparison and beat x264+placebo and x265+ very slow in both objective and subjective testing.

        To the poster that said he had tried QS via handbrake and it was awful, I agree, QS via handbrake is awful but that’s because handbrake is a poor excuse for a piece of software.

        If you must use handbrake to test quick sync, because you’re not comfortable with cli encoders at least look through the docs and learn about the advanced options that greatly improve quality:

        [url<]https://handbrake.fr/docs/en/1.0.0/technical/video-qsv-options.html[/url<]

        No serious encoder uses HB, they either use ffmpeg via cli or x264/x265 via cli. Handbrake has a very well known limitation that its developers admit right on their website:

        [url<]https://handbrake.fr/news.php?article=37[/url<]

        "video pipeline is still 8-bit 4:2:0"

        So if you have a 4:4:4 16 bit source and feed it to handbrake, it will step it down 4:2:0 8 bit before spitting out a 4:2:0 10 bit or 12 bit encode; this is bush league, only someone that doesn't know anything about video would actually use a piece of software like this. Seriously, anytime I see a review, and this is nearly all the reviews I see on most tech sites, where the encoding tests are done via handbrake, cyberlink, tmpg, or the canned Tech ARP x264 benchmark, I immediately know that these people have no idea what they are doing with regards to video and I just ignore the results.

          • synthtel2
          • 2 years ago

          Even if AMD doesn’t get around to fixing that compilation bug in a timely manner, it seems to mainly be an issue with older versions of GCC. Clang seems to be unaffected, and newer versions of GCC seem to be a lot better or completely immune (when I read up on this, nobody had figured out that kind of detail yet). The first clue that compilation isn’t a lost cause of a workload should be that Zen is so great at it, yet this issue was only even discovered quite recently.

            • just brew it!
            • 2 years ago

            The gcc compiler suite is also one of the most widely used in the world (probably in the top three). The fact that this has not been more widely reported implies that it is only affecting a small percentage of systems.

            It is definitely a cause for concern, but if it is a CPU bug I am confident a fix will be found. With the heavy use of microcode in modern CPUs, it really should be addressable via a microcode patch.

          • just brew it!
          • 2 years ago

          You are making a big assumption — namely, that whatever is causing the gcc crash will not be fixed.

          • Flying Fox
          • 2 years ago

          [quote<]Seriously, anytime I see a review, and this is nearly all the reviews I see on most tech sites, where the encoding tests are done via handbrake, cyberlink, tmpg, or the canned Tech ARP x264 benchmark, I immediately know that these people have no idea what they are doing with regards to video and I just ignore the results.[/quote<] I believe TR always welcomes constructive feedback. With your claimed expertise, can you direct us to some of your threads, or better yet, suggest which tool(s) to use, and perhaps add how to effectively test with these tools while maintaining said quality levels? HB is just too easy, so a lot of people may just not know anything better. Of course, you can argue that since HB is what most people use the benchmark still has some meaning to them.

            • sophisticles
            • 2 years ago

            Re: Video testing.

            If you’re going to test cpu encoding performance, do not use a source that is in a delivery format, always use a source that is either uncompressed, losslessly compressed or in a intermediate mastering format.

            Good test sources include the massive 186Gb y4m Tears of Steel movie, NetFlix’s 90Gb Meridian movie, which was created by NetFlix with the purpose of being a torture test for encoders, editors, hardware and work flow.

            Another good source for test material is [url<]https://www.harmonicinc.com/free-4k-demo-footage/,[/url<] you need to register a free email address and you can download high quality 4k ProRes that run about 13Gb for 45 seconds, excellent test footage.

            If you want to do a pure software based test encode, I recommend using Ubuntu, grab the latest ffmpeg git, build it with support for libx264 and libx265: extract the tar ball, open a terminal in that directory, type:

            ./configure --enable-nonfree --enable-gpl --enable-libx265 --enable-libx264 --enable-opencl --enable-nvenc --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvpx --enable-cuda --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib6

            then type:

            make && sudo make install

            You will need the latest nvidia and cuda tool kit installed, if you don't care about cuda or nvdia's accelerated scaling, just eliminate those options. Once everything is built, from a terminal just type:

            time ffmpeg -i source -c:v libx264 -preset medium -crf 18 -an output.mp4

            This will produce an mp4, sans audio, you can view the instantaneous encoding speed in the terminal and after it's done the "time" function will output in the terminal how long it took to complete the encode, which you can use to calculate the encoding speed. You can also drop the -preset medium part since x264 defaults to medium anyway.

            If you want to test NVENC via ffmpeg+nvenc or Quick Sync via libmfx or vaapi, PM me and I'll walk you through the steps.

            • Waco
            • 2 years ago

            Or, you know, they could just use Handbrake since it’s a comparison of a video encoding workload that doesn’t change significantly whether you use ffmpeg or Handbrake.

            What would they stand to gain by using a completely different toolchain on a different OS besides a bit of credibility in the eyes of pedants?

            • sophisticles
            • 2 years ago

            The problem with the way they, and everyone else uses Handbrake and the sources they use is that they give a false representation of cpu performance. With delivery formats as source, such as a Blu-Ray, you have a decoding bottleneck and when sites add resizing, rather than increasing the workload they actually increase the bottleneck. Using ffmpeg and nvenc as an example, if I use a VC-1 1080p source as test material and software decoding, I am limited to about 110 fps encoding speed, if I add nvdec to the mix (for hardware decoding) encoding speed skyrockets to nearly 400fps.

            With software such as Handbrake, aside from the fact that it’s not used in any serious production environment, the cpu results are skewed due to a software bottleneck that is impossible to alleviate

            At the very least, if they insist on using Handbrake, they should use an uncompressed or lossless or intermediate format and do not resize and increase the preset used to very slow which is what most serious encoders use when they use x264 for archival or delivery purposes.

            Their current tests are basically worthless in providing an end user a proper picture to help them decide what to buy.

            • Waco
            • 2 years ago

            …except it is being used here exactly how essentially all consumers use it, and faster CPUs translate to faster decode/encode performance.

            If you say “everyone is doing it wrong” that’s nice, but useless, because that’s how they use it. Myself included.

            • Redocbew
            • 2 years ago

            Not to mention it’d be pants on head stupid to create a video compression algorithm that made decoding hard enough to create some “bottleneck”.

            Of course, encoding doesn’t necessarily imply compression, but as you said that’s almost always the way it is anyway.

            • rechicero
            • 2 years ago

            Linux marketshare is 5%. You should offer a benchmark that is useful for the other 95% of people. If Handbrake is so bad, which are the free alternatives for Windows-MacOS?

          • ptsant
          • 2 years ago

          Ryzen does NOT support the same instructions as a Bulldozer. It takes a careful choice of flags to produce something that runs fast and does not contain illegal instructions. In the beginning I messed up and even some of my own software would not run.

          Now, I have compiled the linux kernel (twice), the whole Caffe AI infrastructure and several other packages for my Ryzen 1700X without difficulty. I’m almost certain that if people have issues, it’s because they don’t have a correct binary. I really wouldn’t blame this on the CPU without clear proof.

            • MathMan
            • 2 years ago

            There’s this:
            [url<]http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/b48dd28447fc8ef62fbc963accd301557fd9ac20[/url<] This has nothing to do with wrong compilation flags, especially since it has been observed for other task as well.

          • joselillo_25
          • 2 years ago

          “””Regarding Blender, I have no idea where anyone got the idea that gpu rendering was only for previewing but that for final rendering the cpu is somehow faster and higher quality. This is just laughable and shows that this person has never used Blender in any serious capacity.””

          mmm have you ever work in the 3d industry? a 3d software does not work like you are saying, the GPU render has no use besides the viewport, where the artist manipulates poligons and textures.

          thats why AMD is creating a GPU COMPUTATIONAL render this year to use the GPU to Render the same way that the CPU does..

          you are an ignorant.

            • sophisticles
            • 2 years ago

            Have you ever worked in the 3d industry? I know you’ll claim you have but I also know that you don’t. If, as you and the other poster claimed, no one used gpu acceleration in final render, then the option for gpu accelerated final rendering a) wouldn’t exist in any software and b) wouldn’t be considered a must have feature in every 3d rendering software.

            I would like you, or anyone else, to prove that software based final rendering is both faster and higher quality than gpu based final rendering, using Blender and any test file, feel free to do a software based and gpu based final render, using identical settings and show me that the cpu render was both faster and higher quality.

            And for both of your information, render farms that are used in 3d production, are always gpu accelerated.

            • joselillo_25
            • 2 years ago

            “”””””””””””Have you ever worked in the 3d industry? I know you’ll claim you have but I also know that you don’t. If, as you and the other poster claimed, no one used gpu acceleration in final render, then the option for gpu accelerated final rendering a) wouldn’t exist in any software and b) wouldn’t be considered a must have feature in every 3d rendering software””””””””””””

            there are a lot of 3d software that do not have GPU render option, cinema 4d for example, wich is probably the most used in the world. The GPU assisted renders started to be added last year, VRAY using CUDA for example, a lot of this software is still in beta and begin developed, like AMD GPU render

            just download a project in blender and render it using the GPU and the CPU to see it for yourself.

            render farm are not 3d acelerated, in fact the usually used xeon with no GPU :-)) is just in the last year when they have been offering more and more GPU services.

            you can read this article:

            [url<]http://www.cgsociety.org/news/article/2180/five-reasons-to-adopt-gpu-rendering-in-2016[/url<]

            you can see there the state of 3d rendering in 2016, few months ago, and a guy telling the profesionals that use CPU rendering why to move to the GPU rendering BECAUSE EVERYONE IS USING CPU RENDERS

            also there are options that are not supported in GPU rendering, your problem is you are confusing Open GL and videogame render with raytracing and the ways a image is render in a 3D software. you do not understand what you are talking about.

            • sophisticles
            • 2 years ago

            I like people like you; they disagree with someone on a subject they know nothing about, they then post a link to an article that proves exactly what the other person was saying, here are some quotes directly from the article you linked to:

            [quote<] "GPUs paired with a technology like Redshift's can render images significantly faster compared to CPU renderers," he explains. "Faster rendering means lower hardware costs, especially considering that a single GPU equals the rendering performance of five-to-20 high-end CPUs on average. “Faster rendering also means artists are able to quickly iterate on their work and without having to wait for hours to visualize the effect of a geometry, light or shader modification. This has a positive effect on their creativity and, subsequently, on the quality of the final result." "GPU rendering allows individual artists to produce higher-quality work," he explains. "Hard-to-render effects like raytraced depth of field, motion blur, glossy reflections and global illumination can now be easily employed in 3D scenes – and that includes animations. “An individual artist can install four high-end GPUs onto a single computer and rival the performance of several tens of high-end CPU-based render farm machines. And you get all of this with lower electricity costs and a fraction of the occupied physical space compared to what an equivalent CPU render farm would require."[/quote<] The entire article you linked to is basically word for word what I said, gpu's are superior for rendering than cpu's, much faster amd higher quality. The sad thing is you actually took the time to edit your response 3 times just to prove you don't know what you are talking about. Hang your head in shame.

            • credible
            • 2 years ago

            Not sure why the downvotes are there but kudos.

            • joselillo_25
            • 2 years ago

            What you have said is:

            [quote<]1) You can throw away the Blender benchmark results because they are meaningless; anyone that uses Blender will not bother rendering via cpu, any cpu, they will just use gpu acceleration and smoke those rendering benchmarks.[/quote<]

            You said that because you do not know how 3D software works and have never used any. For example, you still cannot use SSS or smoke on the GPU, GPU rendering is RAM limited, there are no physics, etc. It is not just flipping a "GPU acceleration" flag like in a video player.

            And you would be surprised that in some cases CPU 3D rendering is still faster than GPU 3D rendering, but to know that you need to know how to use 3D software and the huge range of different scenes and tools needed to create a scene.

            As I have told you, a lot of big names in the industry, like Cinema 4D for example, don't have a GPU render option yet.

      • Waco
      • 2 years ago

      Clearly we just need to compile faster, then the bug, if it exists, can’t show up. 😛

      Best CPU for the job? Ryzen!

        • Redocbew
        • 2 years ago

        The only alternative would be to try to get the bugs out before compiling the software, but seriously, who does that?

      • joselillo_25
      • 2 years ago

      [quote<]1) You can throw away the Blender benchmark results because they are meaningless; anyone that uses Blender will not bother rendering via cpu, any cpu, they will just use gpu acceleration and smoke those rendering benchmarks.[/quote<]

      You are terribly wrong about this. GPU rendering is not used in the 3D industry; it is used only in the viewport of the software, not for the final render.

      • Manabu
      • 2 years ago

      According to MSU’s 2016 codec shootout, x264 needs almost half the bitrate of Quick Sync for the same quality. I don’t know what you are smoking.
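
      [A rough, illustrative sketch of how one might sanity-check that claim, assuming an ffmpeg build that includes both libx264 and Intel's h264_qsv encoder. The file name and quality settings are placeholders, and comparing output sizes at loosely matched settings is only a crude proxy; a proper comparison uses a quality metric such as SSIM or VMAF, as the MSU shootout does.]

        import os
        import subprocess

        SRC = "clip.y4m"  # placeholder source clip

        jobs = {
            "out_x264.mp4": ["-c:v", "libx264", "-preset", "medium", "-crf", "23"],
            "out_qsv.mp4": ["-c:v", "h264_qsv", "-global_quality", "23"],
        }

        for out_name, enc_args in jobs.items():
            # Encode the same source with each encoder, then compare output sizes.
            subprocess.run(["ffmpeg", "-y", "-i", SRC, *enc_args, out_name], check=True)
            print(out_name, os.path.getsize(out_name) // 1024, "KiB")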

    • DPete27
    • 2 years ago

    As primarily a gamer, it seems as though I’m not the demographic for this article.

      • morphine
      • 2 years ago

      Perhaps you’d be more interested in our [url=https://techreport.com/review/31724/amd-ryzen-5-1600x-and-ryzen-5-1500x-cpus-reviewed-part-one<]Ryzen 5 gaming review[/url<], published nearly two months ago.

        • DPete27
        • 2 years ago

        What!?! You mean there was a part 1!?!! My bad, I completely forgot about that article.

        Monday….

          • morphine
          • 2 years ago

          Heh, no worries. Happens. This just means you have to read TR more often. Like, multiple times a day 😉

            • DancinJack
            • 2 years ago

            Or, read it at all? The title of this article says part two, and the very first sentence states what happened with the gaming part of the review, and even links to it!

            • morphine
            • 2 years ago

            Peace, brah!

            • DPete27
            • 2 years ago

            I do that already. More likely that I’ve overwhelmed my memory with too many TR articles.

      • EzioAs
      • 2 years ago

      True, can’t argue with that but for folks who do every kind of work/entertainment on their main PC, the Ryzen 5 seems to present great value.

      • RAGEPRO
      • 2 years ago

      I upvoted you since you are correct. You are not the demographic for this article.

    • derFunkenstein
    • 2 years ago

    Wow, Native Instruments really did come through for you. Just about everybody who even tinkers around with audio production at least has a Kontakt Player or a Komplete Players license laying around (since they’re free and have a tiny 1GB library compared to the full version’s 45GB or so). I really dig that this benchmark is included.

      • morphine
      • 2 years ago

      According to Jeff, Native Instruments was really cool about just handing us Kontakt licenses for the purposes of benchmarking. Kudos to them.

        • derFunkenstein
        • 2 years ago

        Of all the music software crap I own (which includes the full versions of both Kontakt and Guitar Rig), they also have the most permissive licenses when it comes to installing on both a desktop and notebook at the same time, too. Just be sure to use only one at a time and they’re totally cool with it, and it basically says as much in the EULA. Nobody does licensing better.

          • RAGEPRO
          • 2 years ago

          I mean, some of us might prefer a GNU-friendly license, but I certainly can’t complain about Native’s compared to other professional software.

            • derFunkenstein
            • 2 years ago

            Obvi I meant commercial, closed-source licensing. :p

            Edit: Even Cakewalk (who up through Sonar X3 didn’t even require an online activation) has changed its tune (ha!) over the last few years, since getting bought out by Gibson. I didn’t bother getting on the Sonar yearly subscription bandwagon. X3 works just fine for me.

            • sreams
            • 2 years ago

            Although… Now it is possible to end up with lifetime upgrades for Sonar for not much $. I used to spend $100/year for Sonar upgrades. The last time I spent $200 and I never have to pay again.

            • derFunkenstein
            • 2 years ago

            Unfortunately they’re not doing upgrades from X3 and older anymore. I got some nag emails and I was waiting for a lifetime update deal, but it never came. :-/

            • smilingcrow
            • 2 years ago

            They ran the lifetime upgrades promotion twice for extended periods and also made it clear they planned on dropping upgrades for older versions. This was in their monthly newsletter.

            You can still buy some upgrades from stores so I presume you can still activate them.

            • derFunkenstein
            • 2 years ago

            I know they had those lifetime promotions, but I didn’t jump in on either. Once they gave the firm cutoff date they quit offering the lifetime stuff. I wasn’t going to buy for a year, either. X3 is fine.

        • smilingcrow
        • 2 years ago

        They will likely hand over an NFR (Not For Resale) license, which precludes the license being transferred to another NI account. So there's little downside for them, plus free publicity, making it a good marketing decision.

      • smilingcrow
      • 2 years ago

      The full Kontakt library is a 23GB download, but Kontakt is more about the 3rd-party libraries and the facilities than the bundled library, I'd say.
      Remember that many 3rd-party libraries require the full version, not the free Player, as the full version gives them full integration with Kontrol/Maschine etc., which is a big part of NI's infrastructure in terms of workflow.

      NI has a 50%-off sale on upgrades/updates right now; if you buy, get the boxed Komplete versions, as they are the same price and come with a 1TB USB 3.0 HDD. Well, the Ultimate versions do, but maybe regular Komplete comes with 500GB!

        • derFunkenstein
        • 2 years ago

        That’s the download size but it’s up around 45GB on my hard drive.

        Guess I forgot that the Player isn’t enough for some libraries. Forgive me since I have the full one. :p

          • smilingcrow
          • 2 years ago

          Mine’s 23GB installed so maybe you have some 3rd party libraries in the same folder?
          Mine’s a fresh install on a new PC of Komplete 10 Ultimate but I haven’t installed all my 3rd party libraries yet.

    • Kretschmer
    • 2 years ago

    I question how multi-threaded your “real-world” benchmarks are. People spend much more time in their browsers and MS Office than in highly-threaded applications like Cinebench or picCOLOR.

      • flip-mode
      • 2 years ago

      If you’re just a web/office user then you’re still doing quite fine with a Core 2 Duo or a Phenom II.

        • shank15217
        • 2 years ago

        Core 2 Duo / Phenom really doesn’t cut it for Office 2016 or Visio 2016. Even office apps need decent CPUs when working with large complex documents and illustrations.

        • BobbinThreadbare
        • 2 years ago

        Modern JavaScript brings a Core 2 Duo to its knees.

        Seriously, try to use Facebook, Gmail, or Google Docs with one.

          • semitope
          • 2 years ago

          Highly unlikely that a Core 2 Duo or Phenom could not handle Facebook, Gmail, Google Docs, etc. My phone handles them fine in desktop mode.

            • BobbinThreadbare
            • 2 years ago

            I use a core 2 duo every day and it sucks.

            • chuckula
            • 2 years ago

            Rip its threads out!

            • RAGEPRO
            • 2 years ago

            Probably due either to insufficient RAM combined with hard-drive-only storage, or due to some security package or other software on the machine. I use a PC with an [s<]Athlon 5150[/s<] [i<][sorry, just checked, it's a 5350][/i<] (quad-core Jaguar [s<]1.6[/s<] 2 GHz) to do most of my work for TR and it has no issues. Four cores, sure, but [s<]1.6[/s<] 2 GHz Jaguar is [i<][still][/i<] much slower than the majority of Core 2 Duos. It also only has single-channel memory, and shares it with the integrated graphics.

      • derFunkenstein
      • 2 years ago

      Why do you question? Can you not read? There is no question. Those benchmarks are mostly very multithreaded. The web stuff is the big exception.

        • Mr Bill
        • 2 years ago

        Considering the barrage of other links that open when you click one, multithreaded browsers, anti-virus, and ad-blocker apps would be appreciated.

          • derFunkenstein
          • 2 years ago

          Most browsers already dedicate one thread for each tab, so JavaScript in two tabs can run multi-threaded, but it’s probably very difficult to find a way to multithread JS execution on a single tab.

            • asmithdev
            • 2 years ago

            Way ahead of you, but it requires other developers to make use of it. [url<]http://www.hamsters.io/[/url<]

      • caconym
      • 2 years ago

      This is clearly somebody who’s never had to stay at work late waiting for a render to finish so it can be comped.

        • Amiga500+
        • 2 years ago

        Or tried to open MS Outlook. 😀

      • just brew it!
      • 2 years ago

      Have you looked at the CPU usage of a modern web browser like Chrome lately? There are a surprisingly large number of threads running.

        • Anonymous Coward
        • 2 years ago

        Meh, people were making a very similar argument when OS X was new and dual-socket PPCs were supposed to crush P4s. Counting threads is a crude measure of meaningful performance improvement.

    • chuckula
    • 2 years ago

    Looking at the scatter plot, it becomes pretty clear that RyZen [b<]really[/b<] likes it some L3 cache. It's a 50% jump from the 1400 [8MB L3] up to the 1500X [16MB L3], which both have 4 cores and clearly don't have a 50% clockspeed delta. However, from the 1500X up to even the higher-clocked 1600X, the delta is noticeably less than 50%, even with two more cores and a small clockspeed boost on the 1600X. L3 FTW.

      • derFunkenstein
      • 2 years ago

      Well, there’s that, and the fact that even with 8MB of L3, the Ryzen 1400 still has two CCXs with two cores each (edit: this is per Jeff’s comment on the part 1 review, or the Ryzen 5 product announcement post, I forget which). I think at least some of that difference could be mitigated by a 4+0 config or a single CCX, like the upcoming APUs should have.

        • chuckula
        • 2 years ago

        I think there are two different factors at play here:
        1. Total cache size.
        2. Latency when jumping out of the cache.

        The large majority of productivity benchmarks TR uses care a whole lot more about total cache size vs. latency since they mostly don’t require a whole lot of cache thrashing to move data between cores.

        However — and this is borne out in game benchmarks — other workloads are much more sensitive to cache latency when there’s a miss that requires jumping between CCXs. In that regard, I wouldn’t be surprised at all to see the RyZen APUs turn in productivity scores that are close to the 1400 while also being better at gaming than the 1500X (call it an early prediction, of course clockspeeds are also a factor).

          • derFunkenstein
          • 2 years ago

          Unfortunately I don’t have a way to disable half the cache on my Ryzen 7 1700 system to find out, unless I go with a 4+0 config. 2+2 with 16MB vs 4+0 with 8MB was very similar, but the smaller cache had the edge:

          [url<]https://techreport.com/forums/viewtopic.php?f=2&t=119316&p=1343964#p1343964[/url<]

          Of course that was ages ago, as Ryzen benchmarks go, since AMD has released two big AGESA updates. And no productivity tests. But yeah, I expect gaming performance on the APUs to be (clock for clock) better than the Ryzen 1400, if only a little bit.

            • Anonymous Coward
            • 2 years ago

            I wonder if AMD will have an L3 on the APU. When dealing with 4 cores, has the L3 route become the clear favorite vs. separate but larger L2s?

            • derFunkenstein
            • 2 years ago

            I have to think it’ll have some form of L3, and it also seems like the path of least (engineering) resistance is to just plug a single quad-core CCX into the design. So my guess is that it’ll have 8MB of L3 and talk to the integrated graphics over the infinity fabric.

    • flip-mode
    • 2 years ago

    Of all the times I have seen some variation of “we’ll have to follow up with more on this” in a CPU review on TechReport, it feels like this is one of the very few times there has actually been a follow up article. Thanks for that. I want to let you know it did not go unnoticed. I hope that turns into a new trend.

      • derFunkenstein
      • 2 years ago

      Yeah I’m still waiting on those overclocking results from the [url=https://techreport.com/review/15818/intel-core-i7-processors/15<]first Nehalem review[/url<]

        • flip-mode
        • 2 years ago

        Seriously. In the past, every time there was a “we will revisit this in another article” statement I just rolled my eyes because that pretty much meant it was never going to happen. There are exceptions, but few enough that when they happened it came as a shock.

      • Mr Bill
      • 2 years ago

      +3 Yes, a very good followup.

      • TwistedKestrel
      • 2 years ago

      Still waiting on the Dark Rock Pro 3 article

        • geniekid
        • 2 years ago

        R9 Nano over here.

      • Anonymous Coward
      • 2 years ago

      You should see all the variations of that phrase where I work!

    • chuckula
    • 2 years ago

    This is why it pays to re-read the article with some extra detail:
    [quote<]Intel "Core i7-6800K" (Core i7-6950X with four cores disabled)[/quote<]

    Good to see you finally tested a chip with a big chunk of the cores disabled, Jeff! 😉

    • K-L-Waster
    • 2 years ago

    I know TR doesn’t normally do overclocking tests as part of CPU reviews — but it would be very interesting to see how overclocked RyZen parts compare to overclocked Intel CPUs.

    Great review. Clearly the R5 chips have taken over the bang-for-buck crown in the mid-range.

      • chuckula
      • 2 years ago

      If you check out TR’s Twitter feed from a ways back, Jeff said he got one of the 6-core parts up to about 3.9GHz on all cores.

        • K-L-Waster
        • 2 years ago

        Yes, I saw that — I’m thinking more in terms of how does that affect the measured performance numbers. What is the net result of that clock speed?

          • chuckula
          • 2 years ago

           Back-of-the-napkin calculation that assumes the 1600X is the baseline: about 8-10% improvement for heavily threaded workloads that would have driven all of the cores at 3.6GHz in the stock chip (see the quick sketch below). That number may of course vary based on how well the workloads respond to clockspeed increases.

          I’m not sure if the overclock affected the top turbo boost speed, so lightly threaded loads that were going up to 4.1GHz anyway might not be substantially affected.
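
          [For what it's worth, that napkin math is just a clock-speed ratio. A minimal sketch, assuming the 1600X's 3.6GHz stock all-core clock as the baseline, the ~3.9GHz all-core overclock mentioned above, and perfect frequency scaling for fully threaded workloads:]

            stock_all_core_ghz = 3.6  # assumed 1600X all-core boost, per the comment above
            oc_all_core_ghz = 3.9     # the all-core overclock reported on TR's Twitter feed

            # Best case for heavily threaded workloads: throughput scales linearly with clock.
            gain = oc_all_core_ghz / stock_all_core_ghz - 1
            print(f"upper bound: {gain:.1%}")  # ~8.3%, consistent with the 8-10% estimate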

            • RAGEPRO
            • 2 years ago

            As I understand it, turbo boost is disabled when overclocking Ryzen chips, which means performance on lightly-threaded loads would actually be worse.

            • derFunkenstein
            • 2 years ago

            This is a truth-fact. If you don’t overclock the all-core max to at least what Turbo can do, you’re giving up a small amount of single-threaded performance.

            • BobbinThreadbare
            • 2 years ago

            This assumes turbo boost delivers speeds consistently higher than 3.9GHz, and not just occasionally spikes past it.

            • derFunkenstein
            • 2 years ago

            I don’t have any way of knowing for sure, but I have never seen my Ryzen 7 1700 get past 3.5GHz consistently at stock clocks, and it has a “max” boost of 3.7. So even running it at 3.7 all the time was a slight bump in single-thread performance.

            • K-L-Waster
            • 2 years ago

            Did you try anything along the lines of a benchmark with and without OC? (i.e. is Chuckula’s napkin calculation of the effect of an OC more or less accurate?)

            • derFunkenstein
            • 2 years ago

            I did some TimeSpy physics stuff, and there was definitely a difference. That benchmark is extremely multi-threaded, though.

      • ptsant
      • 2 years ago

      Ryzen is not a spectacular overclocker, but most people can manage to get the non-X chips to more than the SINGLE core turbo on ALL cores. Typically, you can expect 3.75-4.0 GHz on all cores.

      In practice, most 1700s can get close to the non-OC 1800X and most 1600s can get close to the non-OC 1600X. You won’t be able to push a 1500X to 5.0GHz, even if you watercool it. It’s not a thermal limit, but a hard limit of the architecture, whose sweet spot is around 3.0-3.5GHz.

    • crystall
    • 2 years ago

    I love the smell of competition in the morning

      • ronch
      • 2 years ago

      And I love the smell of covfefe in the morning.

    • chuckula
    • 2 years ago

    Great review as always, Jeff. I agree that the 1600/1600X parts appear to be the real stars of the RyZen lineup, and more broadly the 6-core segment is definitely going to see more attention going forward.

    Incidentally, thank you for waiting on the reviews to let the platforms receive some updates so that you had a better opportunity to test with faster RAM and avoid some of the hiccups we saw on launch day.

      • shank15217
      • 2 years ago

      With the new price drop the 1700/1700X part is better IMO.

        • gerryg
        • 2 years ago

        The 1700 looks nice at $299: two more cores and a nice turbo frequency, plus a Wraith cooler, and it's still a 65W part like the 1600. But I think it's just another point on the spectrum of good-value Ryzen CPUs. It depends on how much cash you have in your budget. If you don't have $299 to spend, the 1600/1600X are great choices.

        • gerryg
        • 2 years ago

        BTW I assume we’re going to see little price drops all summer long as Intel and AMD start duking it out. Consumer #winning! I’m looking forward to the Labor Day component sales this year… 🙂

    • ronch
    • 2 years ago

    Look at those memory benchmarks. At 3200MT/s it looks like AMD has pretty much addressed the issue of having generally weaker memory controller performance. Sure there’s still a bit more latency but it’s nice to know you’re not getting something much less than what you’d get from the other company that has bajillions more to throw at R&D.

    Really, if my current rig suddenly goes belly up this year I wouldn’t even think twice about getting the 1600X.

      • DoomGuy64
      • 2 years ago

      Yup. I already went 1600X, and it's a great system. Waiting until after the major BIOS issues were fixed was the right move, and the upcoming BIOS should theoretically allow me to enable 3600 MT/s down the road. The only real issue remaining is that people who need to fill all their RAM slots will likely never get 3200+ speeds. But that remains to be seen. It may be possible with relaxed timings and custom BIOS tweaks, who knows.

      PS. Don't install Asus's software, because it sucks. This can't be mentioned enough because nobody else is mentioning it, and I don't want people to accidentally install it and screw up their OS. All of their apps and drivers are broken, do not uninstall properly, and there is no official cleaner. The only unofficial cleaner available was hosted on a Google Drive which is no longer accessible. If anyone has a copy of it, please re-upload it to another file hosting site.

        • ronch
        • 2 years ago

        Yup. The hardware is willing but the software is weak. Typical for Taiwanese hardware vendors, I guess.

        • freebird
        • 2 years ago

        I think it will be doable… after the 1st round of BIOS upgrades I was able to get my 4x16GB Trident Z 3000 DIMMs from 2667 up to 3200, but I had to relax CL14 to CL18 to get above 2667 MT/s.

        [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16820232202[/url<]

        So if they come out with a better-tuned BIOS we may just be able to push 4 sticks even higher, since currently 3200 is the limit without PCIe bus manipulation, and my 960 EVO NVMe drive balks at more than a couple of MHz of change... 🙁

        Can't wait for Ryzen 2 or 3 improvements to see what they bring on 7nm or even 5nm, given the IBM announcement:

        [url<]https://www-03.ibm.com/press/us/en/pressrelease/52531.wss[/url<]

        "ALBANY, NY - 05 Jun 2017: IBM (NYSE: IBM), its Research Alliance partners GLOBALFOUNDRIES and Samsung, and equipment suppliers have developed an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips."

      • Tristan
      • 2 years ago

      Wait for Coffee Lake with 6 cores @ 4.7 GHz and an additional 10% IPC relative to Kaby Lake.

        • Krogoth
        • 2 years ago

        Coffee Lake isn’t coming out until the end of this year, though.

        You always build systems for the here and now. There’s always something faster/better on the foreseeable horizon.

          • blastdoor
          • 2 years ago

          [quote<]You always build systems for the here and now. There's always something faster/better on the foreseeable horizon.[/quote<]

          I know this is the conventional wisdom and I certainly see the logic of it. It's not a bad heuristic. Yet it's also true that there are points in time that are better to buy/build than others, and it's not impossible to forecast what those points are. Certainly if your computer blew up yesterday then you need a new one today and it would be silly to wait. But there are situations where the best thing to do is to wait just a little bit longer.

            • chuckula
            • 2 years ago

            If “I’m waiting for RyZen” is considered a good answer to a poll question, then “I’m waiting for Coffee Lake” is just as valid.

            Or saying that both statements are equally wrong is another valid opinion.

            It’s when you start saying that waiting for some products is bad but waiting for other products is good that you run into trouble.

            • blastdoor
            • 2 years ago

            Waiting for products that represent a significant improvement seems worthwhile, waiting for products that represent a minor improvement seems less worthwhile.

            For example, waiting an extra 6 months for CoffeeLake rather than buying KabyLake makes more sense to me than waiting 6 months for KabyLake rather than buying SkyLake.

          • Cooe
          • 2 years ago

          Actually, Intel’s just announced Coffee Lake’s been pushed back to early 2018 (and even with the delay, it remains stuck on 14nm which is great news for AMD).

            • chuckula
            • 2 years ago

            [quote<]Intel's just announced[/quote<]

            Mind posting a link to a domain name that resolves to intel.com? And if it leads to a certain site I know that posted 17 stories about Vega launching last October, it doesn't count.

        • Cooe
        • 2 years ago

        Lol they are stuck on 14nm again, and the microarch is expected to have minimal changes compared to Skylake/Kaby Lake, so you must be smoking something extra special to be expecting a 10% IPC increase from Coffee Lake. In all likelihood IPC will be damn near identical (just like Skylake -> Kaby Lake) just with 2 more cores on the die and perhaps (but far less likely) an expanded L2 Cache like Skylake-X.

        • freebird
        • 2 years ago

        Nah, I’d rather wait for the 5nm Ryzen just around the corner…

        ALBANY, NY – 05 Jun 2017: IBM (NYSE: IBM), its Research Alliance partners GLOBALFOUNDRIES and Samsung, and equipment suppliers have developed an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips.

          • chuckula
          • 2 years ago

          #5nmThreadRipperLaunchedFirst

      • tipoo
      • 2 years ago

      Yup. The MC has been a weakness of theirs ever since Intel addressed their head start in integrating it onto the CPU, so this is nice to see.

        • RAGEPRO
        • 2 years ago

        Even if you look at benchmarks from that time, Intel generally kept up with or beat the Phenoms and their integrated memory controller in terms of memory bandwidth. The memory controller on AMD hardware has never been worth much until now.

        I recall way back in the Athlon days that Via's chipsets, specifically the KT133A and later KT266 (and on up through the KT600), had the best memory performance, and that's why people stuck with them despite those chipsets being considerably flakier than AMD's own 760 chipset. I also recall on the early Athlon chipsets that hacking them to enable memory interleaving gave a massive performance boost in games of the day that were bottlenecked by main memory bandwidth (e.g. Q3A). Ahh, nostalgia.

          • Krogoth
          • 2 years ago

          Intel didn’t get a handle on its integrated memory controller, in terms of latency and actual bandwidth throughput, until Sandy Bridge, though.

            • chuckula
            • 2 years ago

            There’s a website I know that came to a different conclusion:

            [url<]https://techreport.com/review/17545/intel-core-i5-750-and-core-i7-870-processors/5[/url<]

            Note that the Nehalem i5-750 and i7-870 have two memory channels just like the Phenom II, so it's not an "unfair" comparison with the older 3-channel Nehalem parts either.

            • Krogoth
            • 2 years ago

            Sandy Bridge smokes its Bloomfield-based predecessors at memory throughput and latency bro.

            • chuckula
            • 2 years ago

            Wow, Krogoth just complimented Intel by accident.

            What you said: Intel didn’t get its memory controller competitive with AMD until at least Sandy Bridge.

            What I posted: factual information showing that statement is wrong; Intel’s first on-die memory controllers in Nehalem, a full two years before Sandy Bridge, were already better than the Phenom II’s (which would have been at least the second or maybe third iteration of AMD’s memory controller).

            Your reply: Yeah, but you’re wrong because Intel made even further improvements after Nehalem!!

            My answer: YOU WIN KROGOTH YOU WIN!

            • NTMBK
            • 2 years ago

            Meh.

            • Krogoth
            • 2 years ago

            The memory controllers on Nehalem-era chips had stupid compatibility issues and weren’t exactly friendly with memory overclocking (not that you really needed it unless you were doing hardcore CPU overclocking). The memory controller on Sandy Bridge and onward fixed most of those silly “headaches”.

            I don’t understand why you assume this has anything to do with AMD. Just remove the silly blue-colored shades. They are making you look rather silly.

            • chuckula
            • 2 years ago

            So TR’s review is wrong. OK.

            • Krogoth
            • 2 years ago

            Performance/latency != overclockability and compatibility.

            The review does not cover the latter items. There are countless stories of Bloomfield/Lynnfield users having stupid memory-related headaches back in the day. It was mostly trying to run factory-overclocked memory, or the CPU didn’t like certain DDR3 DIMMs. JEDEC-spec DIMMs ran flawlessly, though.

        • ronch
        • 2 years ago

          I agree with RAGEPRO. Integrating the memory controller helps mitigate AMD’s weaker memory controller implementations. And when Intel did it too, we saw how AMD hadn’t done it as well as Intel did. I remember how Randy Allen (the Barcelona chief guy) practically bragged about AMD’s integrated memory controller, like it was too hard for Intel to do.

          • sreams
          • 2 years ago

          It was probably worth bragging about, because it took AMD doing it for Intel to even consider it.

            • ronch
            • 2 years ago

            I think even if AMD had never done it, Intel would’ve done it nonetheless. Maybe at a later time, but Intel would’ve done it. This is also considering how everything is increasingly moving onto the CPU die.

            • sreams
            • 2 years ago

            Maybe, maybe not. In either case, it seems like Intel needs a push from time to time to actually move the tech forward. Minus competition from AMD, they don’t have much motivation to do much other than sell what they have with as little effort as possible. I guess that is an unfortunate inevitability when competition is lacking.

          • Concupiscence
          • 2 years ago

          He wasn’t wrong to be proud; an IMC was a very big deal for x86 in 2003. It took Intel a long time to deliver a follow-up, but it appears they wanted to get it right… and they succeeded, especially after Nehalem.

      • credible
      • 2 years ago

      Just grabbed myself a 1600 and an MSI Tomahawk as it had 6 fan headers for my HTPC.

      I am very happy:)

    • juzz86
    • 2 years ago

    Thanks very much, Jeff. I know it’s been a hard slog for you and the team, and really appreciate you coming back to round out this one.

    A solid piece, and an honest conclusion. Cheers!

      • isotope123
      • 2 years ago

        What’s been going on lately? I’ve used The Tech Report for years, but I’m not sure why there’s been a “hard slog” as you say. Can you explain for me please?

        • juzz86
        • 2 years ago

        Jeff alluded to it in a couple of past posts mate, but he’s had a few things going on which delayed this article a bit.

        As it was one I have wanted to see since Part I personally, I felt a token of appreciation was in order, rather than the usual ‘better late than never’ stuff. The guys have also had trade shows and whatnot, so I definitely appreciate them squeezing this one in for us 🙂

          • isotope123
          • 2 years ago

          Absolutely they do! I’m just genuinely concerned, hope they’re all happy and healthy.

    • juzz86
    • 2 years ago

    Stinkin’ double-post. Sorry.

    • Jigar
    • 2 years ago

    Excellent review, typical TR quality, loved it. Thanks.

    On topic – AMD has produced one of its best CPUs in a long time; I hope to upgrade to Ryzen’s second iteration by next year.
