AMD’s FX-8350 processor reviewed

AMD has been going through some difficult times lately, with management changes, layoffs, and a steady drain of talent, as a host of familiar faces have fled for greener pastures. These things have come against a backdrop of mounting financial losses and tough questions about the company’s future and direction.

Much of the turmoil can be traced back to one big, fateful event: the difficult, disappointing birth of the all-new CPU microarchitecture known as Bulldozer. As techie types, we’re perhaps overestimating the role technology plays in these matters. Still, Bulldozer was viewed by many as AMD’s next great hope, its first from-the-ground-up new x86 CPU architecture in over a decade. When the FX processors not only failed to catch up to the competition from Intel but also struggled to beat AMD’s own prior generation in performance and power efficiency, some unpleasant fallout was inevitable.

Once the first chips were out the door, AMD’s engineering task became clear: to do as much as possible to improve the Bulldozer microarchitecture as quickly as possible. Alongside the FX processors, the firm announced a plan that included a series of updates to its CPU cores over the next few years, with promised increases in performance and power efficiency. The first of those incremental updates was dubbed Piledriver, a modest refresh that first hit the market this past spring aboard the Trinity APU. Now, roughly a year after the first FX chips arrived, revamped FX processors based on Piledriver are making their debut, more or less on schedule. That is, all things considered, a very positive sign.

The question now is whether it’s enough. Are these CPUs good enough to carve out some space in the market against some tremendously formidable competition? You may be surprised by the answer.

Vishera ain’t just a river in Russia


A picture of the Vishera die. Source: AMD.

The chip that’s the subject of our attention today is code-named Vishera, and it’s the direct successor to the silicon that powered the prior-gen FX processors, which was known as Orochi. Vishera and Orochi share almost everything—both are manufactured on GlobalFoundries’ 32-nm SOI fabrication process, both have 8MB of L3 cache, and both are essentially eight-core CPUs. The one big difference is the transition from Bulldozer to Piledriver cores—or, to put it more precisely, from Bulldozer to Piledriver modules. These “modules” are a fundamental structure in AMD’s latest architectures, and they house two “tightly coupled” integer cores that share certain resources, including a front-end, L2 cache, and floating-point unit. Thus, AMD bills a four-module FX processor as an eight-core CPU, and we can’t entirely object to that label.

Code name      | Key products | Cores/modules | Threads | Last-level cache | Process node (nm) | Est. transistors (millions) | Die area (mm²)
Lynnfield      | Core i5, i7  | 4             | 8       | 8 MB             | 45                | 774                         | 296
Gulftown       | Core i7-9xx  | 6             | 12      | 12 MB            | 32                | 1168                        | 248
Sandy Bridge   | Core i5, i7  | 4             | 8       | 8 MB             | 32                | 995                         | 216
Sandy Bridge-E | Core i7-39xx | 8             | 16      | 20 MB            | 32                | 2270                        | 435
Ivy Bridge     | Core i5, i7  | 4             | 8       | 8 MB             | 22                | 1400                        | 160
Deneb          | Phenom II    | 4             | 4       | 6 MB             | 45                | 758                         | 258
Thuban         | Phenom II X6 | 6             | 6       | 6 MB             | 45                | 904                         | 346
Llano          | A8, A6, A4   | 4             | 4       | 1 MB x 4         | 32                | 1450                        | 228
Trinity        | A10, A8, A6  | 2             | 4       | 2 MB x 2         | 32                | 1303                        | 246
Orochi/Zambezi | FX           | 4             | 8       | 8 MB             | 32                | 1200                        | 315
Vishera        | FX           | 4             | 8       | 8 MB             | 32                | 1200                        | 315

We covered the enhancements made to the Piledriver modules in more detail here, but the highlights are pretty straightforward. Piledriver includes a collection of small tweaks to individual parts of the module intended to increase instruction throughput. The changes range from the CPU’s front end through the cores and into the cache subsystem, and no single change contributes much more than a 1% increase in throughput. All together, the gains are maybe on the order of 6%, perhaps less, so we’re not looking at a vast improvement. Still, Piledriver includes other modifications. The FPU supports the three-operand version of the fused multiply-add instruction, a key part of the AVX specification that will also be supported in Intel’s upcoming Haswell chips. This change puts AMD and Intel on the same page going forward. (Support for the FMA4 instruction from Bulldozer is retained, at least for now.) More crucially, Piledriver has been optimized to reach higher clock speeds at lower voltages, a tweak that paid off nicely for the mobile Trinity chip. As you’ll see, it has benefited the desktop FX processors, as well.
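
To make the operand-count difference concrete, here is a minimal sketch in C using the compiler intrinsics for the two forms. The function names are ours, and the snippet assumes a compiler that exposes both instruction sets (GCC with -mfma -mfma4, for instance):

```c
#include <immintrin.h>   /* FMA3 intrinsics */
#include <x86intrin.h>   /* FMA4 intrinsics on GCC */

/* FMA3, the three-operand form Piledriver adds and Haswell will share:
   the destination register must double as one of the sources, so the
   compiler reuses an input. */
__m256 fma3_example(__m256 a, __m256 b, __m256 c)
{
    return _mm256_fmadd_ps(a, b, c);    /* a*b + c */
}

/* FMA4, Bulldozer's four-operand form, still supported for now: the
   destination can be a register distinct from all three sources. */
__m256 fma4_example(__m256 a, __m256 b, __m256 c)
{
    return _mm256_macc_ps(a, b, c);     /* a*b + c */
}
```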

The lineup

Model   | Modules | Threads | Base clock | Max Turbo clock | North bridge speed | L3 cache | TDP   | Price
FX-8350 | 4       | 8       | 4.0 GHz    | 4.2 GHz         | 2.2 GHz            | 8 MB     | 125 W | $195
FX-8320 | 4       | 8       | 3.5 GHz    | 4.0 GHz         | 2.2 GHz            | 8 MB     | 125 W | $169
FX-6300 | 3       | 6       | 3.5 GHz    | 4.1 GHz         | 2.0 GHz            | 8 MB     | 95 W  | $132
FX-4300 | 2       | 4       | 3.8 GHz    | 4.0 GHz         | 2.0 GHz            | 4 MB     | 95 W  | $122

The new lineup of FX chips is detailed above. Today’s headliner is the FX-8350, the only one of the four new Vishera-based parts AMD has supplied us for review. The FX-8350 shares the same power envelope (125W) and Turbo peak (4.2GHz) as the chip it supplants, the FX-8150. The most notable difference is the base clock; the FX-8350’s is a nosebleed-inducing 4GHz, up from its predecessor’s 3.6GHz.

The FX-8350’s higher base frequency should boost performance, especially in widely multithreaded workloads. Still, if you’re like me, you’re looking at the 200MHz gap between the base and Turbo peak clock speeds and wondering why it isn’t larger. The whole idea of these dynamic clocking schemes, after all, is to take advantage of the additional thermal headroom made available when not all cores are busy. Vishera can gate off power to inactive modules, granting more space for those that remain active. Higher voltages and frequencies are then usually possible within the same thermal envelope. There’s a 500MHz gap between the base and peak clocks for the FX-8320. Why doesn’t the FX-8350 offer a similar increase in peak clock frequency?

Our best guess is that too few of these chips will tolerate frequencies above 4.2GHz well enough, consistently enough, and at low enough voltages to allow AMD to ship a product in volume with a higher Turbo peak. If so, that’s a shame, because low performance in lightly threaded workloads is arguably this CPU architecture’s biggest weakness. Higher Turbo frequencies could do a lot to remedy that problem.

That said, the FX-8350’s price is quite nice. The $195 sticker positions it between a couple of Intel’s Ivy Bridge-based offerings, the Core i5-3470 at $185 and the Core i5-3570K at $225. Both are true quad-core, four-threaded processors. Of those two, only the i5-3570K has an unlocked upper multiplier for easy overclocking, whereas all of the FX parts are unlocked. On the other hand, the Intel processors have peak power ratings of 77W, vastly lower than the FX-8350’s 125W TDP.

Speaking of smaller power envelopes, the two lower-end FX models take advantage of Piledriver’s power enhancements by dropping to a more modest 95W. The chips they replace, the FX-6200 and FX-4170, are both 125W parts. The new models even sacrifice a bit of clock speed to get there. For instance, the FX-6300 is clocked at 3.5/4.1GHz, while the older FX-6200 runs at 3.8/4.1GHz. AMD tells us it expects the performance of these two parts to be similar, since the per-clock performance gains in Piledriver should make up some of the difference.

The lowest-end FX processor, the FX-4300, overlaps almost entirely with the A10-5800K desktop Trinity that we reviewed earlier this month. Both list for $122. The 5800K has a 200MHz higher Turbo peak, five more watts of max power draw, and integrated graphics. The FX-4300 instead has 4MB of L3 cache, which Trinity lacks. Then again, the A-series APUs have integrated PCIe connectivity and drop into their own brand-new socket, while the new FX series uses the same Socket AM3+ infrastructure as the prior models, so they’re really aimed at different platforms.

Our testing methods

We ran every test at least three times and reported the median of the scores produced.

The test systems were configured like so:

Socket AM3+ system
  Processors: Phenom II X4 850, Phenom II X4 980, Phenom II X6 1100T, AMD FX-4170, AMD FX-6200, AMD FX-8150, AMD FX-8350
  Motherboard: Asus Crosshair V Formula (990FX north bridge, SB950 south bridge)
  Memory: 8 GB (2 DIMMs) AMD Entertainment Edition DDR3 SDRAM at 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: AMD chipset 12.3
  Audio: Integrated SB950/ALC889 with Realtek 6.0.1.6602 drivers

LGA1155 system
  Processors: Pentium G2120, Core i3-3225, Core i5-2400, Core i5-2500K, Core i7-2600K, Core i5-3470, Core i5-3570K, Core i7-3770K
  Motherboard: MSI Z77A-GD65 (Z77 Express)
  Memory: 8 GB (2 DIMMs) Corsair Vengeance DDR3 SDRAM at 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: INF update 9.3.0.1020, iRST 11.1.0.1006
  Audio: Integrated Z77/ALC898 with Realtek 6.0.1.6602 drivers

LGA2011 system
  Processors: Core i7-3960X, Core i7-3820
  Motherboard: Intel DX79SI (X79 Express)
  Memory: 16 GB (4 DIMMs) Corsair Vengeance DDR3 SDRAM at 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: INF update 9.2.3.1022, RSTe 3.0.0.3020
  Audio: Integrated X79/ALC892 with Realtek 6.0.1.6602 drivers

Socket FM1 system
  Processor: AMD A8-3850
  Motherboard: Gigabyte A75M-UD2H (A75 FCH)
  Memory: 8 GB (2 DIMMs) Corsair Vengeance DDR3 SDRAM at 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: AMD chipset 12.3
  Audio: Integrated A75/ALC889 with Realtek 6.0.1.6602 drivers

Socket FM2 system
  Processors: AMD A8-5600K, AMD A10-5800K
  Motherboard: MSI FM2-A85XA-G65 (A85 FCH)
  Memory: 8 GB (2 DIMMs) AMD Entertainment Edition DDR3 SDRAM at 1600 MT/s, 9-9-9-24 1T
  Chipset drivers: AMD chipset 12.8
  Audio: Integrated A75/ALC889 with Realtek 6.0.1.6602 drivers

LGA1156 system
  Processors: Core i5-655K, Core i5-760, Core i7-875K
  Motherboard: Asus P7P55D-E Pro (P55 PCH)
  Memory: 8 GB (2 DIMMs) Corsair Vengeance DDR3 SDRAM at 1333 MT/s, 8-8-8-20 1T
  Chipset drivers: INF update 9.3.0.1020, iRST 11.1.0.1006
  Audio: Integrated P55/VIA VT1828S with Microsoft drivers

They all shared the following common elements:

  Hard drive: Kingston HyperX SH100S3B 120GB SSD
  Discrete graphics: XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 12.3 drivers
  OS: Windows 7 Ultimate x64 Edition, Service Pack 1 (AMD systems only: KB2646060, KB2645594 hotfixes)
  Power supply: Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled. We did disable these power management features to measure cache latencies, but otherwise, it was unnecessary to do so.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

These synthetic tests are intended to measure specific properties of the system and may not end up tracking all that closely with real-world application performance. Still, they can be enlightening.

One of Piledriver’s purported tweaks is an improved hardware prefetcher, which populates the L2 cache by examining access patterns and predicting what data will be needed next. Whatever changes AMD has made on that front don’t show up in our Stream results, where the FX-8350 matches the FX-8150 almost exactly. Many of the Intel chips extract more bandwidth from the same dual-channel DDR3 memory config. With four channels, the Core i7-3820 and 3960X achieve nearly double the transfer rates.
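
A quick aside on what Stream actually measures: its "triad" kernel boils down to the sketch below, run over arrays sized far beyond the caches so that DRAM, not compute, is the limiter. (This is the standard Stream loop; the timing harness and multithreaded dispatch are omitted.)

```c
#include <stddef.h>

/* STREAM "triad": a[i] = b[i] + q*c[i]. Each iteration reads two
   doubles and writes one, roughly 24 bytes of traffic per element;
   with n in the tens of millions, the loop's speed is set by memory
   bandwidth rather than arithmetic. */
void stream_triad(double *a, const double *b, const double *c,
                  double q, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + q * c[i];
}
```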

This test is multithreaded, so it captures the bandwidth of all caches on all cores concurrently. The different test block sizes step us down from the L1 and L2 caches into L3 and main memory.
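
Here's a single-threaded sketch of that block-size sweep, with sizes of our own choosing and a crude clock()-based timer; the real tool runs a copy on every core and measures far more carefully:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t max = 64u * 1024 * 1024;    /* 64MB: well past any L3 */
    uint8_t *buf = calloc(max, 1);
    if (!buf) return 1;

    for (size_t size = 16 * 1024; size <= max; size *= 2) {
        volatile uint64_t sink = 0;
        const size_t total = 1u << 30;       /* ~1 GB of traffic per size */
        clock_t t0 = clock();
        for (size_t p = 0; p < total / size; p++)
            for (size_t i = 0; i < size; i += 64)  /* one 64B line per load */
                sink += buf[i];
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        /* throughput steps down as 'size' outgrows L1, then L2, then L3 */
        printf("%8zu KB: ~%.1f GB/s\n", size / 1024, total / secs / 1e9);
        (void)sink;
    }
    free(buf);
    return 0;
}
```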

Although the FX-8350 achieves somewhat higher cache throughput than the FX-8150, we can probably chalk up the differences to the 8350’s higher base clock frequency. We might be seeing the effects of Piledriver’s larger L1 cache TLB at the 32KB block size, but it’s tough to say for sure.

SiSoft has a nice write-up of this latency testing tool, for those who are interested. We used the “in-page random” access pattern to reduce the impact of prefetchers on our measurements. We’ve reported the results in terms of CPU cycles, which is how this tool returns them. The problem with translating these results into nanoseconds, as we’ve done in the past with latency measurements, is that we don’t always know the clock speed of the CPU, which can vary depending on Turbo responses. At any rate, knowing latency in clock cycles is helpful for understanding, say, the differences between Bulldozer and Piledriver. Imagine that.
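
The conversion itself is a single division; the hard part is knowing the denominator at the moment of measurement. A sketch, with illustrative numbers of our own choosing:

```c
/* latency_ns = cycles / clock_GHz. A hypothetical 55-cycle access is
   13.75 ns at the FX-8350's 4.0GHz base clock but only ~13.1 ns at its
   4.2GHz Turbo peak, so the same cache yields two different "latencies"
   depending on the clock at the instant of the probe. */
static double cycles_to_ns(double cycles, double clock_ghz)
{
    return cycles / clock_ghz;
}
```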

Piledriver’s memory subsystem doesn’t appear to be any quicker, on a per-cycle basis, than Bulldozer’s. In fact, the FX-8350’s caches are a bit slower at each step up the ladder.

Some quick synthetic math tests

We don’t have a proper SPEC rate test in our suite (yet!), but I wanted to take a quick look at some synthetic computational benchmarks, to see how the different architectures compare, before we move on to more varied and robust application-based workloads. These simple tests in AIDA64 are nicely multithreaded and make use of the latest instructions, including Bulldozer’s XOP in the CPU Hash test and FMA4 in the FPU Julia and Mandel tests.

The FX-8350 takes the top spot in the CPU Hash test, not surprising given the relatively strong performance of the AMD processors in this integer-focused benchmark. The more FPU-intensive fractal tests are a very different story, with the Sandy and Ivy Bridge-based chips topping the charts. Although Vishera’s four FPUs should, in theory, be capable of the same number of peak FLOPS per clock as any Sandy or Ivy quad-core, the FX-8350’s throughput here is substantially lower, even with the advantage of a higher clock speed. With the aid of the FMA instruction and a 4GHz base clock, at least the FX-8350’s four FPUs are able to outperform the six older FPUs on the Phenom II X6 1100T, a feat the FX-8150 can’t duplicate.

Power consumption and efficiency

Our workload for this test is encoding a video with x264, based on a command ripped straight from the x264 benchmark you’ll see later. This encoding job is a two-pass process. The first pass is lightly multithreaded and will give us the chance to see how power consumption looks when mechanisms like Turbo and core power gating are in use. The second pass is more widely multithreaded.

We’ve tested all of the CPUs in our default configuration, which includes a discrete Radeon card. We’ve also popped out the discrete card to get a look at power consumption for the A10, Core i3, and A8-3850.

The raw plots above give us a good sense of several things, including the huge gap between the max power draw of the AMD and Intel solutions in the same price range.

Notice how the Core i5-3570K draws virtually the same amount of power during the lightly-threaded first stage of the encoding process and the more heavily multithreaded second stage. Presumably, that means the CPU is taking full advantage of its prescribed power envelope during both stages. The FX-8150 isn’t far from that ideal, either. The FX-8350, however, draws quite a bit more power during the second stage than the first. That suggests the FX-8350 is leaving some thermal headroom on the table with its relatively conservative 4.2GHz Turbo frequency.

The FX-8350 is a sizeable chip with a hefty thermal envelope, so these results are no surprise. The basic parameters haven’t changed since the FX-8150. The test system based on the closest competition, the Core i5-3470, draws over 20W less at idle and over 100W less under load than our FX-8350 test rig.

We can quantify efficiency by looking at the amount of power used, in kilojoules, during the entirety of our test period, when the chips are busy and at idle. By that measure, the FX-8350 is an improvement over the FX-8150, since it finishes its work and drops to idle sooner.

Perhaps our best measure of CPU power efficiency is task energy: the amount of energy used while encoding our video. This measure rewards CPUs for finishing the job sooner, but it doesn’t account for power draw at idle.
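
The distinction between the two measures boils down to the sketch below, using hypothetical one-second power samples whose busy and idle wattages echo our FX-8350 system's readings:

```c
#include <stdio.h>

/* Integrate power samples (watts) taken every dt_s seconds into
   kilojoules via the rectangle rule: energy = sum of P * dt. */
static double energy_kj(const double *watts, int n, double dt_s)
{
    double joules = 0.0;
    for (int i = 0; i < n; i++)
        joules += watts[i] * dt_s;
    return joules / 1000.0;
}

int main(void)
{
    /* hypothetical trace: five busy seconds, then five idle seconds */
    double trace[10] = {196, 196, 196, 196, 196, 64, 64, 64, 64, 64};
    printf("whole period: %.2f kJ\n", energy_kj(trace, 10, 1.0));
    printf("task energy:  %.2f kJ\n", energy_kj(trace, 5, 1.0)); /* busy only */
    return 0;
}
```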

Although one wouldn’t necessarily think of a 125W processor as power-efficient, the FX-8350 requires less energy to complete this task than any AMD processor before it. This is a pretty solid step forward compared to the Bulldozer-based FX-8150, particularly since Vishera is nothing more than tweaked silicon based on the same basic architecture and built with the same 32-nm SOI fab process.

Again, the competition from Intel is vastly more efficient overall—not just the 22-nm Ivy Bridge parts, but also the 32-nm Sandy Bridge chips.

The Elder Scrolls V: Skyrim

For the gaming tests, we’re using our latency-focused testing methods. If you’re unfamiliar with what we’re doing, you might want to check out our recent CPU gaming performance article, which has a subset of the data here and explains our methods reasonably well.


You can see from the plots that the FX-8350 improves upon the FX-8150 and the Phenom II X6, with more frames generated during the test run and fewer, shorter latency spikes during its duration. (For frame time plots from all of the CPUs tested, go here.)

Although the FX-8350 has the highest FPS average of any AMD processor we’ve tested, the Phenom II X4 980 still edges it out in our latency-focused metric, the 99th percentile frame time. By either measure, the FX-8350 is one of AMD’s fastest gaming chips—but the problem with that statement is easy to see when you look at the recent Intel processors. Even the lowly Pentium G2120 is faster in this Skyrim test scenario.
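
For those curious about the mechanics of the metric, a 99th percentile frame time is just a sort and an index into the captured frame times. This is our own simplified sketch, not TR's actual tooling:

```c
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Sort per-frame render times and take the value 99% of the way up:
   99% of frames completed at least this quickly. Unlike an FPS
   average, the slow outliers you actually feel set this number. */
double percentile_99(double *frame_ms, size_t n)
{
    qsort(frame_ms, n, sizeof *frame_ms, cmp_double);
    return frame_ms[(size_t)(0.99 * (n - 1))];
}
```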

We suspect the Bulldozer architecture’s trouble with gaming comes down to relatively low per-thread performance in lightly threaded workloads. In many games, a single, branchy control thread tends to be the performance limiter. The FX-8150’s frame latencies spike upward for the last 5% or so of frames, which prove to be difficult for it. The FX-8350 doesn’t really change that dynamic—the spike in the last 5% is still present—but its frame times are lower across the board. The improvement is enough to push the FX-8350 slightly ahead of the Phenom II X6 1100T in the last few percentage points. That’s progress. Unfortunately, AMD has a much longer way to go in order to catch Intel’s current processors.

Before anyone panics over the gap between Intel and AMD in this latency-sensitive gaming test, we’ll want to ground our analysis in reality by considering the amount of time spent on truly long-latency frames. Once we do so, some of the practical concerns about FX-8350 performance dissipate. Virtually none of the processors spend any time working on frames for more than 50 milliseconds, our usual threshold for “badness.” That means you’re looking at reasonably fluid animation with most of these CPUs, including the FX-8350. In fact, we have to ratchet the threshold past our customary next stop, 33 milliseconds or 30 FPS, and down to 16.7 milliseconds—equivalent to 60 FPS—to see meaningful differences between the CPUs.
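
The "time spent beyond a threshold" measure works roughly like the following sketch, again our own simplification: only the excess over the threshold accumulates, which keeps one long stall distinct from a merely slow average.

```c
#include <stddef.h>

/* Sum only the portion of each frame time past the threshold.
   Thresholds of 50, 33.3, and 16.7 ms correspond to 20, 30, and
   60 FPS, respectively. */
double time_beyond_ms(const double *frame_ms, size_t n, double threshold)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        if (frame_ms[i] > threshold)
            total += frame_ms[i] - threshold;
    return total;
}
```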

Batman: Arkham City


When we’re moving across the Arkham cityscape, the game engine has to stream in new areas every so often, and that difficult task is partially CPU-bound. You can see the spikes in all of the frame time plots that come at semi-regular intervals, and you’ll notice that the spikes tend to be shorter on the faster processors.

The FPS average and our 99th percentile frame time metric agree: in this tough test filled with slowdowns, the FX-8350 outperforms the prior champ from the green team, the Phenom II X4 980. The 99th percentile result tells the story: the FX-8350 delivers 99% of the frames in this test run in under 25 milliseconds, equivalent to 40 FPS.

The FX-8350’s latency curve looks quite decent, too, with a smooth and not-too-large ramp upward in the last few percentage points worth of frames.

The occasional spikes throughout the test run mean that all of the CPUs spend a little time beyond our 50-millisecond threshold, but the FX-8350 only burns 70 milliseconds working on long-latency frames during our entire 90-second test period. That’s just a few momentary hitches, and it’s less than half the time the FX-8150 spends beyond the same threshold. Still, competing solutions like the Core i5-3470 and i5-3570K reduce those hitches to nearly nothing.

Battlefield 3


Here’s a game that runs quite well on nearly all of the CPUs we tested, with one notable exception: the Pentium G2120, the only processor of the group limited to two physical cores and two logical threads. The rest can handle at least four threads, whether via additional cores, Hyper-Threading, or both.

The FX-8350 performs very well with this nicely threaded game engine, particularly in our more latency-focused metrics. In fact, the FX-8350 spends the least time of any CPU beyond our ultra-tight 16.7 millisecond threshold.

Crysis 2


Notice the spike at the beginning of the test run; it happens on each and every CPU. You can feel the hitch while playing. Apparently, the game is loading some data for the area we’re about to enter. Faster CPUs tend to reduce the size of the spike.

Here are more signs of life from the AMD camp. The FX-8350 outright ties the Intel competition, the Core i5-3470, in the FPS average metric, and the FX-8350’s 99th percentile frame time is only a fraction longer.

The difference in the latency curves from the FX-8150 to the FX-8350 illustrates AMD’s progress. The FX-8150 struggles in roughly a quarter of the frames, with latencies rising to near 20 milliseconds, while the FX-8350 doesn’t reach the 20-millisecond mark until the final 4% or so of frames rendered. Once we reach the last 1% or so of really tough frames, the FX-8350 essentially matches the competition from Intel.

That single big spike at the beginning of the test run contributes virtually all of the time the faster CPUs spend beyond our 50-ms threshold, as you can tell from the plots. We burned about 50% more time waiting for that one frame to finish on the FX-8350 than on the competing Intel products.

Multitasking: Gaming while transcoding video

A number of readers over the years have suggested that some sort of real-time multitasking test would be a nice benchmark for multi-core CPUs. That goal has proven to be rather elusive, but we think our latency-oriented game testing methods may allow us to pull it off. What we did was play some Skyrim, with a 60-second tour around Whiterun, using the same settings as our earlier gaming test. In the background, we had Windows Live Movie Maker transcoding a video from MPEG2 to H.264. Here’s a look at the quality of our Skyrim experience while encoding.


Well, good news and bad news here, I suppose. On the positive front, the FX-8350 outperforms any prior AMD CPU in this test scenario and delivers a fairly smooth Skyrim experience while encoding video in the background. On the downside, the FX-8350’s eight cores do not deliver a superior multitasking experience compared to the quad-core competition from Intel. Even the two-generations-old Core i5-760 is faster.

Civilization V

Civ V will run this benchmark in two ways: either using the graphics card to draw everything on the screen, just as it would during a game, or entirely in software, skipping the rendering, as a pure CPU performance test.

Either way you cut it, the FX-8350 remains true to what we’ve seen in our other gaming tests: it’s pretty fast in absolute terms, easily improves on the performance of prior AMD chips, and still has a long way to go to catch Sandy Bridge, let alone Ivy.

Productivity

Compiling code in GCC

Another persistent request from our readers has been the addition of some sort of code-compiling benchmark. With the help of our resident developer, Bruno Ferreira, we’ve finally put together just such a test. Qtbench tests the time required to compile the QT SDK using the GCC compiler. Here is Bruno’s note about how he put it together:

QT SDK 2010.05 – Windows, compiled via the included MinGW port of GCC 4.4.0.

Even though apparently at the time the Linux version had properly working and supported multithreaded compilation, the Windows version had to be somewhat hacked to achieve the same functionality, due to some batch file snafus.

After a working multithreaded compile was obtained (with the number of simultaneous jobs configurable), it was time to get the compile time down from 45m+ to a manageable level. This required severe hacking of the makefiles in order to strip the build down to a more streamlined version that preferably would still compile before hell froze over.

Then some more fiddling was required in order for the test to be flexible about the paths where it was located. Which led to yet more Makefile mangling (the poor thing).

The number of jobs dispatched by the Qtbench script is configurable, and the compiler does some multithreading of its own, so we did some calibration testing to determine the optimal number of jobs for each CPU.

TrueCrypt disk encryption

TrueCrypt supports acceleration via Intel’s AES-NI instructions, so encryption with the AES algorithm, in particular, should be very fast on the CPUs that support those instructions. We’ve also included results for another algorithm, Twofish, that isn’t accelerated via dedicated instructions.
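
To see why the dedicated instructions help so much: AES-128 encrypts a 16-byte block in ten rounds, and AES-NI turns each round into a single instruction. Below is a minimal sketch of the standard round structure using the intrinsics, not TrueCrypt's actual code; the round keys are assumed to be expanded already:

```c
#include <wmmintrin.h>   /* AES-NI intrinsics; compile with -maes */

/* Encrypt one 128-bit block with AES-128: initial whitening, nine
   AESENC rounds, one AESENCLAST. Software AES needs dozens of table
   lookups and XORs per round instead. */
__m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
{
    block = _mm_xor_si128(block, rk[0]);
    for (int i = 1; i < 10; i++)
        block = _mm_aesenc_si128(block, rk[i]);
    return _mm_aesenclast_si128(block, rk[10]);
}
```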

7-Zip file compression and decompression

SunSpider JavaScript performance

Ah. Now that we’ve moved past the gaming tests, we’re on friendlier ground for the FX-8350. Those eight integer cores can all contribute heavily in most of the tests above, and as a result, the FX-8350 doesn’t just match the Core i5-3570K—it rivals the much pricier Core i7-3770K. SunSpider is the lone exception to that trend, likely because not all elements of it are widely multithreaded.

Image processing

The Panorama Factory photo stitching
The Panorama Factory handles an increasingly popular image processing task: joining together multiple images to create a wide-aspect panorama. This task can require lots of memory and can be computationally intensive, so The Panorama Factory comes in a 64-bit version that’s widely multithreaded. I asked it to join four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.

In the past, we’ve added up the time taken by all of the different elements of the panorama creation wizard and reported that number, along with detailed results for each operation. However, doing so is incredibly data-input-intensive, and the process tends to be dominated by a single, long operation: the stitch. Thus, we’ve simply decided to report the stitch time, which saves us a lot of work and still gets at the heart of the matter.

picCOLOR image processing and analysis

picCOLOR was created by Dr. Reinert H. G. Müller of the FIBUS Institute. This isn’t Photoshop; picCOLOR’s image analysis capabilities can be used for scientific applications like particle flow analysis. Dr. Müller has supplied us with new revisions of his program for some time now, all the while optimizing picCOLOR for new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its individual functions are multithreaded.

At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to incorporate some real-world usage scenarios. As a result, we now have four tests that employ picCOLOR for image analysis: particle image velocimetry, real-time object tracking, a bar-code search, and label recognition and rotation. For the sake of brevity, we’ve included a single overall score for those real-world tests.

Video encoding

x264 HD benchmark

This benchmark tests one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, for the two passes the encoder makes through the video file. I’ve chosen to report them separately, since that’s typically how the results are reported in the public database of results for this benchmark.

Windows Live Movie Maker 14 video encoding

For this test, we used Windows Live Movie Maker to transcode a 30-minute TV show, recorded in 720p .wtv format on my Windows 7 Media Center system, into a 320×240 WMV video appropriate for mobile devices.

The FX-8350 continues to demonstrate solid advancements over the FX-8150 in every test above, but these image-centric applications are a bit more of a challenge. Only in the second pass of the x264 test does the FX-8350 manage to match or outperform its closest rivals from Intel.

3D rendering

LuxMark

Since LuxMark uses OpenCL, we can use it to test both GPU and CPU performance—and even to compare performance across different processor types. OpenCL code is by nature parallelized and relies on a real-time compiler, so it should adapt well to new instructions. For instance, Intel and AMD both offer installable client drivers for OpenCL on x86 processors, and both claim to support AVX. The AMD APP driver even supports Bulldozer’s and Piledriver’s distinctive instructions, FMA4 and XOP.
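
Here's a minimal sketch of how a host program sees those drivers: enumerate every platform and device, after which it can build a context on CPU devices, GPU devices, or both. This is generic OpenCL boilerplate with error checking omitted, not LuxMark's code:

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, plats, &nplat);          /* e.g., AMD APP, Intel OCL */

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev))
            continue;                            /* platform with no devices */
        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
            printf("%s [%s]\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
        }
    }
    return 0;
}
```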

We’ll start with CPU-only results. These results come from the AMD APP driver for OpenCL, since it tends to be faster on both Intel and AMD CPUs, funnily enough.

Now we’ll see how a Radeon HD 7950 performs when driven by each of these CPUs.

Finally, we can combine CPU and GPU computing power to see whether we can extract more performance with the two processor types both working on the same problem at once.

The FX-8350 decidedly outperforms the Core i5-3570K when asked to tackle the problem entirely by itself via the AMD APP ICD. Only the recent Intel CPUs with Hyper-Threading and four (or more) cores are faster. However, the Radeon is clearly more proficient at this job than any of the CPUs, and, like most of the processors, the FX-8350 is better off just feeding the Radeon data than trying to help with the computation.

Cinebench rendering

The Cinebench benchmark is based on Maxon’s Cinema 4D rendering engine. It’s multithreaded and comes with a 64-bit executable. This test runs with just a single thread and then with as many threads as CPU cores (or threads, in CPUs with multiple hardware threads per core) are available.

POV-Ray rendering

Turns out the FX-8150 is no slouch in these rendering apps, and the FX-8350’s solid gains over its predecessor allow it to place near the top of the charts, rivaling the Hyper-Threaded Intel quad cores.

Scientific computing

MyriMatch proteomics

MyriMatch is intended for use in proteomics, or the large-scale study of proteins. You can read more about it here.

STARS Euler3d computational fluid dynamics

Euler3D tackles the difficult problem of simulating fluid dynamics. Like MyriMatch, it tends to be very memory-bandwidth intensive. You can read more about it right here.

Performance in these two scientific computing workloads used to track together pretty closely, believe it or not, and appeared to be primarily limited by memory bandwidth. Over time, the performance results in these two workloads have diverged as CPU architectures have diverged.

Overclocking

All of AMD’s FX processors are unlocked, so overclocking them is, in theory, as easy as turning up the multiplier. I generally prefer to overclock my CPUs using the BIOS—err, firmware—rather than the various Windows programs out there. However, more recently, I’ve taken a liking to the ease and quickness of AMD’s Overdrive utility, along with its ability to control Turbo Core behavior very precisely. So, when it came time to overclock the FX-8350, I decided to use Overdrive. I’m not sure it was the right call, but that’s what I used.

When you’re overclocking a CPU that starts out at 125W, you’re gonna need some decent cooling. AMD recommends the big-ass FX water cooler we used to overclock the FX-8150, but being incredibly lazy, I figured the Thermaltake Frio OCK pictured above, which was already mounted on the CPU, ought to suffice. After all, the radiator is just as large as the water cooler’s, and the thing is rated to dissipate up to 240W. Also, I swear to you, there is plenty of room—more than an inch of clearance—between the CPU fan and the video card, even though it doesn’t look like it in the picture above. Turns out the Frio OCK kept CPU temperatures in the mid-50°C range, even at full tilt, so I think it did its job well enough.

Trouble is, I didn’t quite get the results I’d hoped. As usual, I logged my attempts at various settings as I went, and I’ve reproduced my notes below. I tested stability using a multithreaded Prime95 torture test. Notice that I took a very simple approach, only raising the voltage for the CPU itself, not for the VRMs or anything else. Perhaps that was the reason my attempts went like so:

4.8GHz, 1.475V – reboot

4.7GHz, 1.4875V – lock

4.6GHz, 1.525V – errors on multiple threads

4.6GHz, 1.5375V – errors with temps ~55C

4.6GHz, 1.5375V, Turbo fan – stable with temps ~53.5C, eventually locked

4.6GHz, 1.5375V, manual fan, 100% duty cycle at 50C – lock

4.6GHz, 1.55V, manual fan, 100% duty cycle at 50C – crashes, temps ~54.6C

4.4GHz, 1.55V – ok

4.5GHz, 1.55V – ok, ~57C, 305W

4.5GHz, 1.475V – errors

4.5GHz, 1.525V – errors

4.5GHz, 1.5375V – OK, ~56C

At the end of the process, I could only squeeze an additional 500MHz out of the FX-8350 at 1.5375V, one notch down from the max voltage exposed in the Overdrive utility. AMD told reviewers to expect something closer to 5GHz, so apparently either I’ve failed or this particular chip just isn’t very cooperative.

I disabled Turbo Core for my initial overclocking attempts, but once I’d established a solid base clock, I was able to grab a little more speed by creating a Turbo Core profile that ranged up to 4.8GHz at 1.55V. Here’s how a pair of our benchmarks ran on the overclocked FX-8350.

A couple of other notes. First, remember that we measured peak power draw for the stock-clocked FX-8350 system at 196W in x264 encoding. The overclocked and overvolted config tested above peaked at about 262W, considerably more than the stock one. As you might imagine, when dealing with that sort of heat production, our Frio OCK was spun up like Joe Biden during the VP debate.

Second, I had hoped to include a quick Skyrim test to see how the FX-8350’s gaming performance is improved by higher clock frequencies, but when I went to test it, our overclocked config wasn’t entirely stable. The game didn’t crash, but our character moved around erratically from time to time. (I’m straining to resist making a second Biden reference here.) We’ll have to spend more time with the FX-8350 in order to find an optimal overclocked config.

Conclusions

You’ve probably gathered that the FX-8350 improves on its Bulldozer-based precursor pretty handily for a chip that’s neither a die shrink nor an all-new architecture.

The final verdict on the FX-8350 isn’t terribly difficult to render, but it does have several moving parts. As usual, our value scatter plots will help us sort out the key issues. I’ve created a couple of them for your viewing pleasure. The first one shows overall performance from our entire CPU test suite (a geometric mean), with the exception of the synthetic benchmarks back on page three. Our gaming tests are a component of this overall performance metric. The second scatter plot isolates gaming performance by itself, with our latency-focused 99th percentile frame time results converted to FPS for easy readability. On both plots, the best values will be closer to the top left corner, where prices are low and performance is high.
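
For the record, the arithmetic behind both plots is simple; here's a sketch of the two calculations in our own minimal C:

```c
#include <math.h>
#include <stddef.h>

/* Overall score: geometric mean of per-test results, computed in log
   space so no single benchmark's units can dominate the composite. */
double geomean(const double *scores, size_t n)
{
    double log_sum = 0.0;
    for (size_t i = 0; i < n; i++)
        log_sum += log(scores[i]);
    return exp(log_sum / (double)n);
}

/* Gaming score: a 99th percentile frame time expressed as FPS for
   readability, e.g. 25 ms -> 40 FPS, 16.7 ms -> ~60 FPS. */
double frame_time_to_fps(double frame_ms)
{
    return 1000.0 / frame_ms;
}
```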


The overall performance scatter offers some good news for AMD fans: the FX-8350 outperforms both the Core i5-3470 and the 3570K in our nicely multithreaded test suite. As a result, the FX-8350 will give you more performance for your dollar than the Core i5-3570K, and it at least rivals our value favorite from Intel, the Core i5-3470.

Pop over to the gaming scatter, though, and the picture changes dramatically. There, the FX-8350 is the highest-performance AMD desktop processor to date for gaming, finally toppling the venerable Phenom II X4 980. Yet the FX-8350’s gaming performance almost exactly matches that of the Core i3-3225, a $134 Ivy Bridge-based processor. Meanwhile, the Core i5-3470 delivers markedly superior gaming performance for less money than the FX-8350. The FX-8350 isn’t exactly bad for video games—its performance was generally acceptable in our tests. But it is relatively weak compared to the competition.

This strange divergence between the two performance pictures isn’t just confined to gaming, of course. The FX-8350 is also relatively pokey in image processing applications, in SunSpider, and in the less widely multithreaded portions of our video encoding tests. Many of these scenarios rely on one or several threads, and the FX-8350 suffers compared to recent Intel chips in such cases. Still, the contrast between the FX-8350 and the Sandy/Ivy Bridge chips isn’t nearly as acute as it was with the older FX processors. Piledriver’s IPC gains and that 4GHz base clock have taken the edge off of our objections.

The other major consideration here is power consumption, and really, the FX-8350 isn’t even the same class of product as the Ivy Bridge Core i5 processors on this front. There’s a 48W gap between the TDP ratings of the Core i5 parts and the FX-8350, but in our tests, the actual difference at the wall socket between two similarly configured systems under load was over 100W. That gap is large enough to force the potential buyer to think deeply about the class of power supply, case, and CPU cooler he needs for his build. One could definitely get away with less expensive components for a Core i5 system.

That’s likely why AMD has offered some inducements to buy the FX-8350, including a very generous $195 price tag and an unlocked multiplier. If you’re willing to tolerate more heat and noise from your system, if you’re not particularly concerned about the occasional hitch or slowdown while gaming, if what you really want is maximum multithreaded performance for your dollar… well, then the FX-8350 may just be your next CPU. I can’t say I would go there, personally. I’ve gotten too picky about heat and noise over time, and gaming performance matters a lot to me.
Still, with the FX-8350, AMD has returned to a formula that has endeared it to PC enthusiasts time and time again: offering more performance per dollar than you’d get with the other guys, right in that sub-$200 sweet spot. That’s the sort of progress we can endorse.

Follow me on Twitter for occasional outbursts.

Comments closed
    • Super XP
    • 7 years ago

    Not sure how you are OC’ing the FX 8350, but I’ve successfully OC’ed my 8350 (1244 batch) to 4.80GHz with 1.358v w/ all 8 cores, and at 4.70GHz w/ 1.40v w/ 8 cores and a 277MHz bus speed. With an H100 water cooler, it never went over 59C.

    With 1.5+v? That is a lot of voltage, no thanks. I did extensive Prime95 and Intel Burn testing for at least 1hr with no errors at these settings.

      • Meadows
      • 7 years ago

      Your mileage may always vary when overclocking.

    • wingless
    • 7 years ago

    I want to see this test redone in WINDOWS 8! Win 8 has optimizations that take advantage of AMD modules a little better (supposedly).

    • dpaus
    • 7 years ago

    This! Is! Sparta!!!

    (comment number, um, 300)

    • Chrispy_
    • 7 years ago

    I just retired a quartet of four-year-old i7-940’s from the office and was thinking about taking one home to upgrade the old Core2 box (I decided not to due to the impossible task of tracking down a 1366 board in mATX these days).

    Since the i7-940 is in most ways at parity with the old i7-875K used in these benchmarks, it really hammers home that you shouldn’t be gaming on AMD; their very best efforts today still can’t catch an obsolete, four-year-old processor for a long-dead platform. Worse still, the 940 wasn’t even the fastest Intel chip back in 2008, it was just way more cost-effective than the 965.

      • Airmantharp
      • 7 years ago

      2008? My, the years have passed quickly…

      I don’t even consider Nehalem old- it’s still only one step behind the 2500k that’s in my desktop now, and plenty fast with a few screws tightened.

      • Krogoth
      • 7 years ago

      Actually the truth is that CPUs don’t matter in the gaming arena anymore. It is all about the GPU.

      It has been this way since the Core 2 era.

      The reason is simple. Games are heavily dependent on single-threaded performance, which is tied mostly to clockspeed, and there hasn’t been that much of a jump in clockspeed since Core 2.

        • Airmantharp
        • 7 years ago

        There’s a few games I think you need to play… that no Core 2 will be fast enough for, given their overclocking ceilings.

        This assertion used to be true, but even TR’s frame-time graphs show otherwise today. Average frame-rates? Sure. Smooth frame-rates? Not bloody likely.

          • Krogoth
          • 7 years ago

          What are you smoking? There’s no game that’s “unplayable” on a Core 2/Athlon 64 era processor. Sure, the epenis scores will be lower, but these chips can still deliver a smooth gaming experience if they are driving a decent GPU.

          Newer CPUs yield higher FPS scores (peaks), which is why their average FPS scores are higher.

          However, CPUs have little effect on minimum FPS scores (what actually matters), since those depend entirely on your GPU.

          The reason is simple. The majority of games are single-threaded, and those that are “multi-threaded” are actually dual-threaded, which means quad-core chips or greater don’t help at all. That’s why CPU architectures with an emphasis on multi-threaded applications (Bulldozer/Piledriver) fall behind. Clockspeed/IPC is still king here. There hasn’t been that much improvement here since Nehalem.

            • Airmantharp
            • 7 years ago

            Start with BF3. Try and keep your frame-times below 16.7ms with an Athlon64 or Core 2. And no, I’m not talking about the benchmarks that TR runs.

            • Chrispy_
            • 7 years ago

            Actually, the whole reason I was contemplating taking an i7-940 home as a ‘free’ upgrade was because my Q9550 can’t hack current games anymore, and the Q9550 represents the upper end of the Core2 line.

            At first I wondered if it was the GTX460, so I slotted my 7950 in just to test, but nope – games still stuttered with unacceptable framerate dips. The GTX460 is hardly a modern card, but I run most games at 720p since I’m sitting ten feet away from the screen and it’s more than adequate for that.

            • Krogoth
            • 7 years ago

            Sounds like a configuration issue or monitor refresh rate is too low.

            There’s no way a Q9550+460 combo is “unplayable” with modern games (most only use two threads) at 1280×720.

            The 7950 isn’t going to help at 1280×720, since the GPU itself is CPU-bound there. You will only see an improvement over the 460 if you are gaming at 1920×1080 or 1920×1200 with healthy doses of AA/AF thrown in.

            I could see it being a problem if you were trying to run games at 2560×1600.

            • A_Pickle
            • 7 years ago

            Seriously. I just built my girlfriend a computer off of parts that I bought from The Bargain Basement. C2Q Q9550, 4 GB DDR2-1066, my old Radeon HD 4850… it plays most modern games at 1920×1080 at high settings. Not “highest,” but high. Can’t say I’m displeased.

            • Airmantharp
            • 7 years ago

            Not knocking the older parts- but at the same time, there are ‘bleeding edge’ games that benefit from the higher clocks and higher IPC of Sandy/Ivy, such to the point that there isn’t anything else (AMD or Intel) that can provide the same level of game performance.

            • flip-mode
            • 7 years ago

            BS. HD 4850 was too slow to play Borderlands on high settings at 1600×1200. Too slow to play Metro 2033 at anything approaching high. Bogged down on Crysis on high settings. And so on and so on. I don’t usually go this far, but I’ll say this: there’s something wrong with your opinion in this case.

            • Chrispy_
            • 7 years ago

            I’m kinda with Flip on this.

            Yes, my C2Q runs games just fine if I turn stuff off or down, but I am accustomed to four-megapixel gaming at 60 uninterrupted frames a second on high settings. Even at low graphical settings, basic framerate consistency is something the Q9550 cannot produce. This is something that was raised with TR’s ‘inside-the-second’ methodology, and it’s not a myth, nor me overreacting. There are benchmarks, run by professional journalists, showing the simple fact that an old Core2 architecture doesn’t have the grunt to deliver CONSISTENT, evenly-timed frames.

            What is the point of turning up the eye-candy if, when the really impressive scene that screams out “look at me” comes along, your framerate halves. Or thirds.

            • Airmantharp
            • 7 years ago

            Pretty much- the single-threaded performance just isn’t there, and you can’t reliably overclock the 65nm and 45nm parts high enough to make up the difference. It’s the single reason I’m running a 2500k and passed my Q9550 along.

            • Chrispy_
            • 7 years ago

            Yeah, games still need good single-threaded IPC.

            Sure, some games make good use of multiple cores, but at the end of the day, they’ll always be bottlenecked by the core engine functions if the core they’re running on is slow. Running multi-core stuff windowed with perfmon.exe highlights this – four cores used, but only one core pegged at 100% – everything else running on the other cores is clearly waiting for stuff on that core, even when you eliminate the GPU bottleneck by running a Tahiti card at 720p.

            • ish718
            • 7 years ago

            It is usually the rendering engine that strains a core the most since only 1 core can feed data to the GPU at once.

            • Chrispy_
            • 7 years ago

            Probably not going to happen, but I wonder if AMD/Intel/ARM are researching asymmetric multi-core processors:

            Say, for example, a 9-core processor that had 8 lesser cores such as those in the new Silvermont Atom architecture, and a single primary core that represented the very best single-threaded IPC possible, like an Ivy Bridge core clocked at 5GHz.

            I bet that would do very well in today’s software market where even heavily threaded things are always bottlenecked by one core.

            • A_Pickle
            • 7 years ago

            By that notion, no CPU produced before the current generation has ever met your definition of gaming… don’t get me wrong, I love 60 uninterrupted, consistently delivered frames per second as much as the next man, but to say it is “unplayable” is… just quite wrong. She’s playing Borderlands 2 on the same machine, with me (though, granted, she’s currently running at 1280×1024 on a 17″ CRT monitor).

            I’m not saying it’s the best. But I’m disputing the characterization that it is “unplayable.” It’s playable every bit in the same manner that folks on a budget who built a gaming system out of a Pentium G870 or an Athlon II X4 found their games to be playable: respectably so. I’ve had friends who’ve built systems out of Radeon HD 5770 1 GB + Athlon II X4’s, and those run most modern games exceptionally well, given their market segment. Again, not highest settings, but… to have better-than-console graphics and performance dedicated to one player is by no means out of reach of these systems, which are inexpensive.

            • swaaye
            • 7 years ago

            I have a TV game box with a Phenom II X4 3.4 and GTX 560 Ti powering 1360×768. Almost every game runs at 60 fps. Crysis 2 of course needs Object Detail dropped from Ultra because of the ridiculous tessellation.

            But I have played a few games that run much better on my desktop’s 4.3 GHz Core i5. Hard Reset can get bogged down with physics effects where there’s a lot of action. Of course you can just configure the physics down a notch. Also, big multiplayer games of SupCom FA are very heavy because the simulation thread is on one core.

            • Meadows
            • 7 years ago

            Krogoth is stuck in 2007, I think.

            • Krogoth
            • 7 years ago

            No, it is more that developers are coding titles using hardware from ~2005-2006 as the baseline.

            • flip-mode
            • 7 years ago

            No, they’re not. You present it as if all devs are pushing out the same crap, but they’re not. There are devs that push the boundaries and then there are other devs that push out very undemanding games.

    • shank15217
    • 7 years ago

    http://www.techpowerup.com/reviews/AMD/FX-8350_Piledriver_Review/4.html

    This review paints a completely different power profile of Vishera; can anybody provide some insight? I noticed both TR and TPU used the same boards. Of course, I couldn’t tell what TPU considered a “load.”

      • sschaem
      • 7 years ago

      42W idle, 136W load.
      That’s a 94W delta, which seems to match the 100W TDP.
      Makes sense if they used a tool that only loads the CPU and not the RAM.

      So that seems 100% legit.

      TR’s power numbers:

      64W idle, 196W load (x264).
      That’s a 132W delta. My guess is that the RAM and I/O are under stress and account for the extra watts… but 32 watts?!? Could something have gone wrong on TR’s bench?

        • Meadows
        • 7 years ago

        Techpowerup measured power from the PSU’s 8-pin ATX connector only, while TR measures complete power use at the wall (including the PSU’s inefficiency and everything).

          • sschaem
          • 7 years ago

          Check the graph again. Techpowerup did both:
          full system & 8-pin power load.

          The numbers I quoted are for the full-system power load.

          136 watts for the CPU only would be wrong, and a HUGE deal.
          AMD can’t sell a 100W TDP part that uses an actual 140 watts of power. (And they don’t.)

          Something is still fishy. I don’t see where the 32W comes from in the TR test.
          DDR3 at load can’t be that much, and the SSD should be minimal?

            • Arag0n
            • 7 years ago

            Should be the south bridge maybe, since they tested video conversion and it should stress the I/O controller.

    • Arclight
    • 7 years ago

    Hold up players. What’s the maximum temperature for this chip? I thought I’ve read somewhere it’s 70 or 72 degrees Celsius, but on AMD’s website (http://products.amd.com/en-gb/DesktopCPUDetail.aspx?id=809&f1=AMD+FX+8-Core+Black+Edition&f2=&f3=&f4=1024&f5=AM3%2b&f6=&f7=32nm&f8=125+W&f9=5200&f10=False&f11=False&f12=True) this info is not provided. In this review (http://www.hitechlegion.com/reviews/processors/31312-amd-fx8350?start=8) it says: “Overclocked to 5.1GHz, idle power draw was at 122W and CPU was at 25C, which jumped to 325W at load with the temperature reaching 72C. CoreTemp indicates the TjMax at 90C.” So what is it? I find this bit of info important for the overclocking crowd, since if it was indeed 90, overclocking at 5+GHz would be achievable by everyone, provided they have adequate water cooling. That might improve the 8350’s appeal to AMD fans even more, me thinks.

    • beck2448
    • 7 years ago

    This is STILL not good enough to draw any customers from Intel.

      • Vasilyfav
      • 7 years ago

      Why would it be? It’s worse than any quad-core intel has to offer.

      They are about 30% behind Intel in performance, and if you keep performance equal, about 50% behind on power consumption.

        • sschaem
        • 7 years ago

        Cinebench:
        fx-8350: 6.94, $195
        i7-3770k: 7.54, $329

        30% faster would mean the i7 should deliver a score above 9. It’s clearly much slower than that. Unless you are comparing it to a $1000 LGA2011 6-core CPU?

        If you actually look at an Intel CPU around $230, the i5-3570K (6.03), the delta is actually in AMD’s favor. So you pay less for more performance with the fx-8350.

        The 50% at full load is correct, but there is nothing AMD can do about that short of switching to 22-nm tri-gate, and that won’t happen anytime “soon.”

          • maxxcool
          • 7 years ago

          $150 for an AMD 1100T… and at 4GHz stable I’m getting 7.xx in Cinebench.

      • just brew it!
      • 7 years ago

      No, but it may at least be good enough to hold on to some of the customers they have.

    • flip-mode
    • 7 years ago

    I have a feeling, one that continues to gain strength, that the benchmarks badly overweight multi-threaded performance. Or, to put it another way that doesn’t imply that multi-threaded performance is not important: single-threaded performance is badly underweighted. I have an X4 955 at home and an i5-2500 at work, and the difference in single-threaded scenarios is substantial, even with the X4 955 overclocked to X4 980 speed. Using Autodesk Revit, interacting with a building model is *significantly* more fluid on the i5-2500. All geometry displayed on screen is processed by the CPU, not the GPU, except for very specific things. So the CPU makes a huge difference in the responsiveness of the model. And it’s all single-threaded. There are about 8 functions in Revit that are multi-threaded. This isn’t because Autodesk has been lazy; rather, it is because most of these things are single-threaded by nature. I have a feeling that is the case with most software.

    I don’t know how this can be adjusted for other than finding a set of really valuable single-threaded benchmarks and breaking them out as another ‘button’ in the final scatter plot.

    Does anyone have any suggestions on some single-threaded benchmarks?

    Scott, would it be worth your time to contact Autodesk and see if there are any benchmarks they can suggest for any of their software?

    Edit: here’s a list of what is multi-threaded in Revit:
    http://forums.augi.com/showthread.php?118749-Revit-2011-Multithreaded-processes

    Edit 2: if you want to see something that takes a while, try upgrading a 650 MB file from one version of Revit to the next - it’s been chugging along for 1-1/2 hours now, taking up about 8 GB of RAM on my i5-2500 machine, and still chugging. All on a single thread. I wonder how long this would take on an FX-8350.

      • halbhh2
      • 7 years ago

      Interesting observation. I would qualify it though like this: some people are routinely doing multitasking. Not always decisive, but not negligible. For instance, I have often had a video running on one monitor (Netflix, or a live stream from C-span, etc.), while some webpage like the front page of NYTimes or YahooNews (that’s a hog) auto-refreshes on another monitor, and then I might even analyze chess (3 cores), and then do something else on my main monitor. I’d never want to go back to a dual core, etc. So tests that rely on “multitasking” in some fashion are very interesting to me.

      edit: In view of jensend’s comment below, I changed the first word from “great” to “interesting”

        • flip-mode
        • 7 years ago

        I wonder if multi-tasking benches essentially defeat the purpose of trying to get the clearest sense of single-threaded performance.

      • jensend
      • 7 years ago

      I don’t get what you’re on about.

      The gap between the X4-955 and the i5-2500 is pretty much the same whether you’re looking at multithreaded tests or single-threaded ones. They have the same core count. Yes, the X4 is quite a bit slower; that’s what you get when you compare an April 2009 AMD processor to a January 2011 Intel processor aimed at the same market point. So what?

      Your complaints about the performance of your niche workload are pretty irrelevant to whether TR's benchmark suite is representative of overall performance. While it's true that unless [url=http://en.wikipedia.org/wiki/NC_%28complexity%29<]NC=P[/url<] (which most complexity theorists would find unlikely) there exist tractable but inherently nonparallelizable problems, it's also true that the vast majority of tasks people actually perform which are compute-intensive enough to challenge modern processors are very strongly parallelizable. Since writing parallel software can be significantly more difficult, developers often simply haven't done that work yet. [b<]In the thread you link to, the Autodesk representative admits that this is the case for Revit - that there will be many more parallelized tasks in future versions.[/b<]

      Overall it seems your post boils down to whining that your particular BIM software isn't in the benchmark suite - when the [i<]entire CAD/CAM/CAE/BIM/etc. market[/i<] represents roughly half of one percent of computer users, and Revit in particular less than 1/2000 of computer users.

        • flip-mode
        • 7 years ago

        I don't want to make this personal. If my post came across as an attempt to insert Revit, in particular, into the benchmark suite, I can assure you that is not my agenda - Revit is merely what I use, and when it comes to benchmarks I did turn to the crowd to ask for suggestions. If you disagree that single-threaded performance is important, I hope you'll simply state your opinion as such rather than trying to undermine me personally by suggesting I just want my own software tested. TR uses LuxMark and MyriMatch and other such benches that represent a statistically insignificant number of users, but they can still usefully illustrate how a CPU responds to different types of loads. It's likely that much of TR's benchmark suite is used by less than 1 in 2000 computer users, so that's probably not the deciding factor.

        To represent my case, TR’s benches, and those at other sites, have clearly come to focus primarily on multi-threaded benchmarks. Even video games these days are often multi-threaded. That’s great and fine and understandable, and as I said, I don’t want to suggest in any way that multi-threaded workloads are unimportant – they very much are. I’m suggesting that single-threaded loads are under-represented, and I think it would be fantastic if this could be addressed in the benchmark suite and broken out in a concluding scatter plot.

        I feel somewhat safe in assuming that you disagree with me on this, but I’m unclear as to whether you disagree because you think I want TR to run my own personal benchmarks – which is not the case – or because you think that single-threaded loads are unimportant.

          • xeridea
          • 7 years ago

          There are some single/lightly threaded benchmarks listed - x264 pass 1, for example, and SunSpider, which can only be single or lightly threaded by nature. Of course the 3770K wins, but it costs a lot more, and the 8350 crushes it in pass 2, where the workload is heavily threaded. This 8-core CPU's main focus is heavily threaded workloads, but it is OK at others, so I find multithreaded benchmarks relevant. Of course this is only one application, and others may be different, but the future is leaning more and more toward multithreading, and single-threaded apps are for the most part fast enough on current CPUs that it's not a huge issue in general.

            • accord1999
            • 7 years ago

            The 8350 wins by less than 7%, considerably less than the margins by which the 3770K beats the 8350 in other heavily threaded applications like Euler3D or picCOLOR.

            Even if the future is multi-threading, the 3770K matches the 8350 at 8+ threads and is faster at 7 or fewer threads. This advantage grows to over 50% at 4 threads.

      • chuckula
      • 7 years ago

      [quote<] Does anyone have any suggestions on some single-threaded benchmarks?[/quote<]

      OpenSSL, for one, since it's single-threaded and people use SSL on a daily basis, so it has some relevance.

      • Stranger
      • 7 years ago

      ” if you want to see something that takes a while try upgrading a 650 MB file from one version of Revit to the next – been chugging along for 1-1/2 hours now and taking up about 8 GB of RAM on my i5-2500 machine and still chugging. All on a single thread. I wonder how long this would take on an FX 8350.”

      Are you sure that workload is processor-constrained? It sounds memory/IO-constrained to me.

        • flip-mode
        • 7 years ago

        Pretty sure I'm not bumping into memory or disk limits, but not 100% certain. I've got 24 GB of RAM, no page file (so I'm assuming that means no going to disk), and an SSD. Memory usage was about 8 GB for that process. I had Task Manager open, and CPU usage was pegged at 25% the whole time - a.k.a. a single processor core - with occasional spikes when I was doing something else. But it's a good question.

    • HisDivineOrder
    • 7 years ago

    The conclusion seems to be struggling to find a way to endorse the CPU in light of the fact that AMD is about to disappear from this world. I get why you feel the need, but falsely suggesting there's ever a reason to waste this much power and heat on a chip with this little performance in all the key areas is disingenuous.

    AMD needs to realize the world is not where it was 5-10 years ago. We want our chips to sip power. People who buy high-performance chips mostly want gaming at this point, since just about any chip'll do for anything less for the majority of consumers. And even today, multitasking is limited to quad cores for the majority of software.

    For the majority, there's no compelling argument for Bulldozer or its Piledriver kin. AMD should scrap the whole line, pull an Intel post-Netburst, and return to their older architecture ASAP while optimizing it Pentium M-style.

    • Arclight
    • 7 years ago

    [url<]http://www.tomshardware.com/reviews/fx-8350-vishera-review,3328-2.html[/url<]

    Maybe some more investigation into overclocking should be done in a separate article. It would be fun if it involved undervolting at stock speeds. The NB voltage bump seemed to go a long way for Tom's Hardware's review sample.

    Edit: Since these chips need high voltage to reach high speeds, why oh why hasn't AMD made them with a TJ max of around 90 degrees Celsius, like Intel did with Nehalem? Had the TJ max been higher, getting to 5+ GHz on liquid cooling should have been a breeze (I presume, from what Tom's Hardware's article says).

    Edit 2: Turns out AMD hasn't confirmed the [s<]TJ max[/s<] Tcase max yet.

    • chuckula
    • 7 years ago

    Not that this is a substitute for a full TR test, but HardOCP did do a limited run of benchmarks where the FX-8150, FX-8350, i7-2600K, and i7-3770K were all clocked at an even 4 GHz. Interesting results showing what the actual IPC advantage for Piledriver over Bulldozer is:

    [url<]http://www.hardocp.com/article/2012/10/22/amd_fx8350_piledriver_processor_ipc_overclocking/3[/url<]

    Edit: From my reading, it looks like the jump from Bulldozer --> Piledriver is greater than from Sandy --> Ivy, but not breathtakingly larger. It is very clear from these results that about 2/3 to 3/4 of the performance delta we see from the FX-8350 comes from the clock speed boost, with the rest coming from IPC improvements.

      • ronch
      • 7 years ago

      On page 1 of that review, there's this slide:

      [url<]http://i47.tinypic.com/10cvtav.png[/url<]

      Look at how the cores are labeled. Not even close. I wanna kill the marketing guy who did the overlay of the die. Marketing obviously can't understand a thing about technical stuff.

        • Arclight
        • 7 years ago

        I don't know how accurate that picture is, but a quick stupid question: if they gave up on the L3 cache, wouldn't there be enough space for another 2 modules (4 cores) with another L2 cache to serve them? Or if they did that, would there not be any space left for the uncore?

        Edit
        I just had a sudden realisation regarding the red colour in the FX logo, it represents searing heat coming from the CPU.

          • ronch
          • 7 years ago

          I assume you're talking about being able to fit two more dual-core modules in exchange for the L3 in the same 315 mm² area, and the answer is probably no. Perhaps one module would fit. Go beyond 315 mm², and yes, two modules or even more will fit.

      • abw
      • 7 years ago

      Your claims are sh.it.ty viral marketing. Here are HFR's results, clock for clock,
      using actual software and games made by true men, not Intel's suckers…

      [url<]http://www.hardware.fr/articles/880-6/bulldozer-vs-piledriver-4-ghz.html[/url<]

      Try better next time, Intel employee...

        • Arclight
        • 7 years ago

        OMG, the witch hunt I predicted. Quick, let's weigh them.

          • abw
          • 7 years ago

          Rather a troll hunt; the guy already showed his true ugly face
          earlier in this thread…

            • chuckula
            • 7 years ago

            Hey idiot. I’m going to take TR’s reviews a lot more seriously than some random French website you dug up… did you have to spend the entire day looking for a review that puts your beloved AMD in the best light?

            Stop acting like you are some neutral person who cares so much about the "truth". I've seen your posting history going *way* back, and your desperation to prop up AMD at all costs is getting annoying. Here's a trip down memory lane for you:

            [url<]https://techreport.com/news/20725/intel-unveils-10-core-32-nm-xeon-processors?post=546982[/url<]

            Ooh, look… there you are saying Bulldozer's performance is so great that Intel is in a panic and adding more cores just because of how great AMD is... and you call me a shill?

            • abw
            • 7 years ago

            Calm down, troll; an excess of pills can kill…

            If you think an Intel 6C/12T is enough to battle a 16C Opteron, then you are even more of a sucker…

            As for Intel's Xeons, we all know why they seem "better" under Windows; only suckers
            still act and speak as if there was nothing to it.

            [url<]http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/7[/url<]
            [url<]http://en.wikipedia.org/wiki/Intel_C%2B%2B_Compiler[/url<]

            • NeelyCam
            • 7 years ago

            You sound like a much bigger troll than Chuckula. And all the insults..? Not very classy

            • abw
            • 7 years ago

            Bigger, no; I think you were better at this than me, as proved by your record
            around here, although it seems there's much more water in your wine
            these days…
            As for the insults, you are intelligent enough, I hope, to notice
            the timing of this scenario…

          • NeelyCam
          • 7 years ago

          Burn him! That way we can prove if he is a witch or not

          • sschaem
          • 7 years ago

          In this case I don't think chuckula floats.

            • chuckula
            • 7 years ago

            Lemme check… <splash><bubbles>…. … … … .. . . … <Rescue Chopper Shows Up>

            Yup, I sure don’t float. Therefore, I’m probably not a very small rock, a church, or made out of wood.

        • NeelyCam
        • 7 years ago

        [b<]Are you trying to bypass the word filter?[/b<] TR folks hand out bans for stuff like that..

        Also, what's with the attitude? I don't read French, but the table seems to be comparing some memory benchies, and the two CPUs look pretty much equal. Maybe I didn't get your point...?

          • abw
          • 7 years ago

          Browse a little through the page… the graphs are below the memory benches.

          No need to understand French; the numbers speak for themselves.

            • NeelyCam
            • 7 years ago

            What’s the difference between red bars and green bars?

            • abw
            • 7 years ago

            Red bars are applications and green ones are games; it's rather obvious,
            since the names are on the left.

            The two upper bars labeled "moyenne" (mean) are the averages: the red one,
            "moyenne applis", is the average for applications, and the green one,
            "moyenne jeux", is the average for games.

            For apps the average gain is 7.7%; for games it is 13.5%.

            This is a comparison at a fixed 4 GHz frequency, so clock for clock.

        • clone
        • 7 years ago

        Why all the hostility?….. Both the HardOCP article and your link complement one another: they both show the new AMD tweaks make for solid gains over the previous generation.

        HardOCP also mentioned that the new core uses 15 watts less at load than the previous generation, which is equally nice.

        I have both AMD and Intel rigs and will most likely be replacing the old tri-core CPU with one of these…. I was thinking a 6xxx in a month or so, once prices settle to stable post-release levels….. I may even go with an 8xxx.

          • abw
          • 7 years ago

          The hostility, as you call it, came from an earlier exchange in this thread,
          below on the page.
          The guy came insulting and spreading falsehoods about me, so I gave him a taste of his own trolling in this one.

            • NeelyCam
            • 7 years ago

            Chuckula’s AMDZone comment was unnecessary, but I think you both went a bit overboard with the personals – not only down there, but also up here..

            Can’t we all just get along…?

            • clone
            • 7 years ago

            ok I found it… thx, understood.

        • sschaem
        • 7 years ago

        Turn to pages 4 & 5

      • sschaem
      • 7 years ago

      I agree; 2/3 is about right as a rule of thumb:
      10% from clock, 5% from architecture.

      Check TR's numbers; 15% seems to be right on.

    • 12cores
    • 7 years ago

    Great overclock, guys; way to get in there and overclock the chip. Next time just don't bother overclocking the CPU. Most other sites seem to hit 5 GHz at around 1.45-1.5 V under water. Ridiculous review.

      • rxc6
      • 7 years ago

      Ridiculous comment.

      • ermo
      • 7 years ago

      Let me get this straight: You feel the review is ridiculous because the overclock was limited?

      From my perspective, Scott cooked up a balanced, interesting review which highlights the performance characteristics of PD vs. BD and its competition nicely.

      • NeelyCam
      • 7 years ago

      Your name is misleading – it should say ‘6cores’ or ’12threads’

        • maxxcool
        • 7 years ago

        LOL! snorked my coffee on that one…. very nice 🙂

        • clone
        • 7 years ago

        fury is fogging his mind….. troubled is this one.

      • clone
      • 7 years ago

      You're still mad because you were banned under a previous sig.

      There is some significant value in knowing the cooling requirements of the new chip, and while they didn't hit 5 GHz, they got close enough to make the differences insignificant.

    • juampa_valve_rde
    • 7 years ago

    Talking for talk's sake… If AMD had chosen a more hyperthreading-esque design, it wouldn't be so hard to perform on a single thread while retaining the module concept. Example: a module with 2 cores, but with one core having a hardwired path to the ALUs of the second (companion) core, like the shared FMA units, plus a decoder smart enough to issue twice the ops to the main core when needed - retaining high single-threaded performance but balancing automagically under fully threaded load. That would require a single but fat decoder per module (the current one is single and fat but can't do what I'm describing), plus ALU/FMA sharing (it's FMA-only right now).

    Is Steamroller the one that will have 2 decoders per module?

      • loophole
      • 7 years ago

      Yep, with Steamroller AMD is moving to two decoders per module.

    • halbhh2
    • 7 years ago

    My cost equation looks like this: my 970 AM3+ motherboard and old power supply can handle the 8350 quite easily, and my old $100 Phenom II is fine for now, but what if, for some new situation, I needed more CPU speed? Like 50-60% more CPU throughput. An 8350 would deliver that in my case.

    So, if I eventually want a boost, and if I consider the power cost of an 8350 over time, it looks like this:

    If I get an 8350 drop-in and then burn an extra 80 or 100 watts when maxing the CPU (compared to building a more power-efficient rig with new parts), for example for 4 hours/week, that will cost around 6 cents/week extra, or a few dollars a year more in power. Or a worse case: if I double or triple my max-CPU usage time, maybe even 12 hours/week maxed out, it would cost $10 to $15/year in extra power bills here vs. an Intel rig.

    But it is a drop-in upgrade.

    So at some point, this rig will eventually have one, when the price is right.
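
    For anyone who wants to plug in their own numbers, here's that arithmetic as a tiny sketch (the 16 cents/kWh rate and the hours are assumptions; adjust to taste):

    [code<]
    # Extra running cost of a hotter chip, per the scenario above.
    def extra_cost_per_year(extra_watts, hours_per_week, dollars_per_kwh=0.16):
        """Yearly cost of drawing `extra_watts` more for `hours_per_week` at full load."""
        kwh_per_year = extra_watts / 1000.0 * hours_per_week * 52
        return kwh_per_year * dollars_per_kwh

    print(f"${extra_cost_per_year(100, 4):.2f}/year")   # ~3.33: the 4 h/week case
    print(f"${extra_cost_per_year(100, 12):.2f}/year")  # ~9.98: the 12 h/week case
    [/code<]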

    • TurtlePerson2
    • 7 years ago

    I would really like to see power consumption numbers for a range of different clock frequencies. A technical paper I read about the resonant clock meshes in the Piledriver architecture seemed to gloss over the disadvantages of resonant clock meshes when it comes to overclocking.

    Basically, AMD is using inductors on their chips to create an oscillator, which uses less power than a typical clock distribution network. They use some elements of a typical clock distribution network to set the exact frequency the oscillator operates at. The problem is that if your oscillator is designed for 3 GHz and you run it at 4 GHz, you get very little power savings from the clock mesh.

    If you have time, Scott, I would be very interested to see the relationship between frequency and power consumption when voltage is held constant. In a typical chip this relationship should be linear, but that may not be the case with Piledriver. I'm curious to see whether AMD has chosen to run at the optimal frequency for power consumption or whether users might get more efficiency by increasing or decreasing the clock frequency on this chip.
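
    For the curious, the underlying relationship is plain LC-tank resonance. A minimal sketch with made-up component values (the actual on-die L and C aren't public):

    [code<]
    import math

    def resonant_frequency_hz(l_henries, c_farads):
        # Parallel LC tank: f0 = 1 / (2 * pi * sqrt(L * C))
        return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

    L = 2.2e-12  # ~2.2 pH of effective mesh inductance (illustrative guess)
    C = 1.0e-9   # ~1 nF of clock-mesh capacitance (illustrative guess)
    Q = 5        # assumed tank quality factor; on-die inductors are lossy

    f0 = resonant_frequency_hz(L, C)
    print(f"f0 ~= {f0 / 1e9:.2f} GHz; useful band ~= f0/Q ~= {f0 / Q / 1e9:.2f} GHz")
    # Drive the mesh well outside that band and the resonant savings largely vanish.
    [/code<]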

      • NeelyCam
      • 7 years ago

      [quote<]Basically, AMD is using inductors on their chips to create an oscillator, which uses less power than a typical clock distribution network.[/quote<]

      It's not really an oscillator, but a tuned load.

      [quote<]The problem is that if your oscillator is designed for 3 GHz and you run it at 4 GHz, you get very little power savings from the clock mesh.[/quote<]

      It depends entirely on the Q-factor of the tuning, and also on whether the resonant frequency is adjusted for other frequencies by switching capacitors and/or inductors in and out, which I think is exactly what AMD is doing. Somebody (you?) linked an interesting paper written by the IP company that came up with this clock network. It was in one of the Trinity comment threads.

      Bottom line: it saves a bit of power at full blast and high frequencies, but does little at low frequencies.

        • just brew it!
        • 7 years ago

        Since the capacitance is inherent in the clock mesh itself (i.e. fixed), I imagine they’re using multi-tap inductors to tune the circuit. Just provide multiple drivers at different points along the inductors and switch them in/out as needed to adjust the resonant frequency.

        It’s like an old school [url=http://en.wikipedia.org/wiki/Crystal_radio<]crystal radio[/url<] that could be tuned by moving a slider along the turns of the coil.

          • NeelyCam
          • 7 years ago

          If I remember the paper right, the inductors had some tap points, but those were essentially AC-grounded (big cap).

          I'm not sure that effectively moving the [i<]driver[/i<] to a different point along the inductance would work - that would turn the network into a series LC resonator instead of a parallel one.

            • TurtlePerson2
            • 7 years ago

            Here’s a paper with a quick (1 page) explanation of their resonant clock mesh:

            [url<]http://ewh.ieee.org/r5/denver/sscs/References/2012_02_Sathe.pdf[/url<]

            In the fourth paragraph of the paper they mention "inductor connection" as a parameter, which would seem to imply that the inductors are connected/disconnected to meet the desired inductance. Even with this "tunability" there is still a point of maximum efficiency. In the ISSCC paper (linked above) they found the most efficient frequency to be ~3.4 GHz. I'm really curious as to whether or not those research numbers translate into real-world power consumption numbers.

            • just brew it!
            • 7 years ago

            Thanks for that link. Interesting stuff.

    • link626
    • 7 years ago

    FX is competitive, i.e., it's good enough in most cases.

    But jesus, that power draw. In some cases, double the power and half the performance. That is awful.

      • mnecaise
      • 7 years ago

      Put a positive spin on it… Look at it this way: Winter heating bills will go down.

        • ronch
        • 7 years ago

        And what if you don’t live in a frigid country? Cooling costs will go up.

          • mnecaise
          • 7 years ago

          Use liquid cooling, run lines through the wall, and put the radiator outside. Or, pump the heat into a hot water storage tank, use the water to make coffee. Channel that inner engineer and find a creative solution.

          Edit: typo

      • halbhh2
      • 7 years ago

      You do have to pay attention to what you are using the CPU for, and how you are using it, if you want a clear idea of power costs.

      For Deep Fritz chess analysis (for instance, see the X-bit labs review), I have a clear idea what the 8350's performance is, and can make a clear comparison for my own need: chess analysis. (The 8350 is very close to the i7-3770 on this one; see page 4 at X-bit.)

      5 hours/week of chess analysis * 90 watts more power draw * 16 cents/kWh = about $5/year in extra electricity cost.

      So: 1 year, $5 extra for electricity. It does not matter if the precise number is $3.89 or $7.40; the conclusion is the same: a very cost-effective drop-in upgrade for AM3+ rigs that need more speed than their old X3 or Athlon II X4, etc.
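
      Extending that arithmetic one step, a rough break-even sketch against the $195/$329 street prices quoted earlier in the thread (usage assumptions as above):

      [code<]
      # How long the power penalty takes to erase the CPU price gap.
      hours_per_week = 5.0   # full-load chess analysis, as above
      extra_watts = 90.0     # extra draw of the FX-8350 system under load
      rate = 0.16            # dollars per kWh

      cost_per_year = extra_watts / 1000.0 * hours_per_week * 52 * rate
      price_gap = 329 - 195  # i7-3770K vs. FX-8350 street prices quoted upthread

      print(f"~${cost_per_year:.2f}/year extra electricity")         # ~ $3.74
      print(f"~{price_gap / cost_per_year:.0f} years to break even")  # ~ 36 years
      [/code<]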

        • Sam125
        • 7 years ago

        That's an interesting way of looking at it. The increased power draw isn't much of a factor when it comes to money; $5/yr is a pittance. The other drawbacks of owning such a hot-running CPU - the increased heat and associated noise - are what would be a massive drag during a hot summer or in a bedroom computer.

          • halbhh2
          • 7 years ago

          Good point. My office is on a low-flow vent of the air conditioning and gets warm on warm days even just from 3 monitors and my rig at idle. But even getting down to 100 watts total wouldn't really change that a lot. Then again, I tend to do little chess analysis in the heat of summer; it's more of a cold-day thing anyway.

    • Disco
    • 7 years ago

    So, should I be picking up some AMD stock yet? How much lower can it go?

      • just brew it!
      • 7 years ago

      Well, it can’t go negative. So the answer is clearly zero.

        • tfp
        • 7 years ago

        Now this is the kind of response I like to see

        • jensend
        • 7 years ago

        “Bureaucrat Conrad, you are technically correct — [i<]the best kind of correct[/i<]."

      • Meadows
      • 7 years ago

      Value doesn’t matter, just follow the graph pattern and ride trends.

      “Price action” strategy would be this: follow their stock price, check if it hits an imaginary “bottom” twice. This can be a sign that it “doesn’t want to go lower” and that’s the time you should invest and [i<]hope for the price to start rising[/i<]. If a rise does happen, wait until it starts dropping again (enough to undo the increase of the preceding 1-2 days) and sell them again as if their shares were burning with the vigour of a thousand suns, because it may just be a dead cat bouncing. (I hope not though, I like AMD.)

        • ermo
        • 7 years ago

        oops.

          • Sam125
          • 7 years ago

          I read what you wrote earlier, and you're right. Investing off of price action isn't investing for the vast majority of people; it's gambling. It's trying to beat the market at pricing a security in the very short term, unless you have solid evidence/analysis/hunch that a company can improve on its fundamentals and/or add value for its shareholders, or at the very least increase its book value while maintaining an attractive debt-to-asset ratio.

          So if an investor doesn't do the bare minimum of research, then they're investing blind and are quite literally gambling.

          If you're like me, though, price-action/day-trading is too much work, although some people really have a good instinct for it. I prefer to invest with the macro trends on index funds. Much safer and easier, IMHO… but it's not as much fun, and there aren't as many bragging rights for scoring a ten-bagger or even doubling your investment =) (Of course there's always the risk of losing all of your money, but that's implicit.)

            • Meadows
            • 7 years ago

            This isn't gambling in the strictest sense; it's more like "educated guessing". In gambling, past draws are never useful for predicting future ones, whereas in trading some patterns always emerge.

            I didn't say [i<]day trading[/i<]; this isn't forex or anything. I guess checking the prices every other day (or once a week) is still enough for the above plan. I figured Disco seemed quite clueless in his initial comment, so I didn't want to throw a stock-economy book at him right away.

            (I do suppose that price action probably works better for currencies, since they can't fall forever and the chance of any hyperinflation today is microscopic, so the odds of gaining are greater.)

        • Disco
        • 7 years ago

        Meadows, thanks for the strategic advice and the imagery (dead cat bouncing… lol!)

    • Alexko
    • 7 years ago

    I think the conclusion is a bit unfair. It briefly talks about the average, then discusses at length all the instances in which Vishera falls short of it.

    But it's an average, which means there are also plenty of times when it does better, and those aren't even mentioned. There are benchmarks where the FX-8350 beats the much more expensive i7-3770K; surely that deserves at least a mention.

    The review is otherwise great: good, varied suite of tests, TR’s signature latency-aware protocol for games, all very good stuff.

    • WaltC
    • 7 years ago

    Accurate, fair review of Piledriver - it's what I recently bought my MSI AM3+ board for, as Zambezi was not that immediately intriguing. Will be ordering in ~ten days or so - probably an [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16819113285&name=Processors-Desktops<]FX 8320[/url<], since they are all Black Editions (if I read that correctly).

    I don't know about you guys, but I am more than ready for this economic crash/downturn to end and to see the tech cycles rejuvenate! I've had my fill of cell phones and castrated laptops w/finger-lickin'-good, greasy touch screens as placeholders and placebos standing in for [i<]real technology[/i<] advances.

    (Seriously, how many of you were also fortunate enough to read last week in the latest [i<]Penthouse[/i<] Peek-a-Boo "Gentleman's Green & Wet-back Advisor" (TM) - a financial newsletter for discerning men of opportunistic taste and ambidextrous sleight of hand - that this minuscule slowdown in the AMD engineering juggernaut should present Intel with at least a 12-month window of opportunity in which to try and catch up...? Smashing read, w'ot? Competition is good for all of us. Go Intel - catch up, baby! Keep AMD honest on the pricing front!)

      • ronch
      • 7 years ago

      I think WaltC has had too much to drink.

    • phez
    • 7 years ago

    Is there a reason you have the cpu cooler orientated towards the video card … ?!

      • Damage
      • 7 years ago

      No other way to mount it on the board.

        • Airmantharp
        • 7 years ago

        Given that this was a CPU test, I fully understand not wanting to mess with the setup. It’s not like you can make up for AMD’s lacking here :).

        Still, I had a ‘Picard hand-to-face’ moment when I saw that.

          • ermo
          • 7 years ago

          [quote<]"Still, I had a 'Picard hand-to-face' moment when I saw that."[/quote<] I'm guessing this is why Scott found it necessary to explicitly point out in the accompanying text that there was an inch of clearance between the cooler and the gfx card, even if it wasn't really obvious on the photo?

            • Airmantharp
            • 7 years ago

            I did read the article - an inch of clearance is terrifying. It means the CPU cooler is dumping its heat, significant amounts for the AMD CPUs especially when overclocked, directly onto the hottest exposed part of the GPU. We both agree that it's bad design, but also that it doesn't make a damn bit of difference here :).

            • ermo
            • 7 years ago

            I was actually imagining that the cooler was set up to push the air towards the right (top) side of the motherboard as shown in the picture?

            From your comments, it appears that you think that the cooler is drawing in cool air from the right (top) side of the motherboard and pushing it onto the back side of the GPU circuit board, is that correct?

            @Scott: Out of curiosity, which way did the air flow in your CPU cooler setup?

            • Airmantharp
            • 7 years ago

            No experience with that cooler - by the time they got that big, the integrated water coolers started looking much more attractive when considering the design of the whole system. I think you're right, though: since fans push air towards their label, the cooler would be pulling air from 'below' and exhausting 'up' - airflow from left to right in the picture in the article.

            And that's actually a little more disturbing from a design perspective, but at the same time, immediate overclocking stability isn't really affected by small differences in temperature. It's only when you put the screws to a CPU that you really get to see whether or not it can handle it.

        • Arclight
        • 7 years ago

        Actually, with Photoshop you could.

    • ermo
    • 7 years ago

    @Damage:

    Would it be possible to create a non-gaming value scatter plot, just like you have a gaming-only scatter plot?

    The way I see it, the 8350 is a pretty decent multithreaded workstation chip for the money, once you factor in its ECC support.

      • maasenstodt
      • 7 years ago

      I agree. As someone whose gaming is mostly limited to turn-based strategy titles, factoring in twitchy games on that chart doesn’t help me much.

    • OneArmedScissor
    • 7 years ago

    [i<]Still[/i<] a 2.2 GHz L3 cache?!? What is this I don't even...

    I would like to see at least a brief test where the clocks are more or less synchronized, even if it means lowering the CPU clock. It's looking like there's not much about the design that allows for higher clock speeds, and more like AMD's marketing insisting that they screw up the desktop version just so they can say it's a few hundred MHz faster than Intel.

      • ermo
      • 7 years ago

      How far, I wonder, can one push the L3 cache? For the Phenom II, 2.6 GHz L3 was achievable if you bumped the IMC voltage slightly and made sure you had a good cooler, and it tended to make a measurable difference.

      Did anyone notice whether any of the other Vishera reviews delved into this?

        • Waco
        • 7 years ago

        I haven’t seen any yet that have really played with overclocking to find the weaknesses of the architecture.

      • ronch
      • 7 years ago

      It would be nice if AMD ran the L3 and north bridge at the same clock as the CPU cores, just as Intel does with SB/IB, but that would probably send TDP levels through the roof - if the said circuits are even capable of reaching 4.0 GHz in the first place. This reminds me of when Intel and AMD ran their L2 at a fraction of CPU speed back in the days of the Pentium II/III and Athlon.

        • OneArmedScissor
        • 7 years ago

        Since the CPU voltage could be dropped from 1.3 V+ by knocking off only a few hundred MHz, there might be a net gain from the L3 at the same TDP. You don't know until you try!

        I suspect that's not stock because it might lose the handful of narrow wins in highly threaded benchmarks. AMD wants / needs all the marketing bullet points it can get, and to not look [i<]too[/i<] silly for all those "cores." :p

        I really pointed that out just because I'm surprised they didn't so much as bump it to 2.4 GHz. It may not be helpful in BD / PD for the L3 to be synchronized, which would also require leakier transistors. For Intel, it's different: there's also the ring bus, which complicates switching power states, and has specific relevance to the GPU in SB / IB and the extra memory channels in SB-E.

          • ermo
          • 7 years ago

          But what happens once you downclock the modules for use in denser (8M/16C) packages? Well, the power consumption per module drops (probably drastically), and the ratio between the L3 cache speed and the base clock frequency improves as well.

          This suggests to me that PD might actually be balanced slightly better in its server guise than in its desktop guise. But this is just speculation at this point.

    • jdaven
    • 7 years ago

    So who’s up for a GHz edition Radeon 7970 with 12.11 drivers, an FX-8350 and 16 GB DDR3 1866 gaming rig?

    • Althernai
    • 7 years ago

    It looked fairly impressive to me at first. Sure, the single-threaded performance is still terrible, but at least it wins in the multi-threaded integer workloads it was designed for.

    However, having thought about it some more, it doesn't look that good. AMD is using twice the die space and twice the power to achieve results similar to Ivy Bridge's. Even relative to Sandy Bridge (so 32 nm vs. 32 nm), the performance is uneven, and where Piledriver does win, it does so by much less than you would expect based on the die size and power draw.

    The architecture [i<]is[/i<] an improvement on Bulldozer, but it is still very, very far from Sandy Bridge. Piledriver covers this up with high clocks and a huge die. It needs both of these to have a chance: single-threaded applications are a lost cause and even multi-threading at the level of games is too little. The resemblance to the Pentium 4 is still uncanny; the only difference is that with the P4, Intel did not have the option of putting many of them on a huge die.

      • just brew it!
      • 7 years ago

      [quote<]The architecture is an improvement on Bulldozer, but it is still very, very far from Sandy Bridge. Piledriver covers this up with high clocks and a huge die.[/quote<] And lower prices.

    • Spotpuff
    • 7 years ago

    Not a comment on AMD or this review in particular, but I was wondering if you could maybe add Lightroom exporting from RAW/DNG as a benchmark. It seems to be largely unaffected by the storage system (according to Adobe itself, SSDs don't help much), and a lot more people probably use it than panostitching.

      • Mr Bill
      • 7 years ago

      Seconded; also, I would like to see auto-tone and render applied to a meaningful number of RAW photos.

    • chuckula
    • 7 years ago

    Vishera looks like a success if only for one reason: With Bulldozer we had a huge amount of over-promising and under-delivering. With Vishera we have just the opposite, and the chips will help AMD stay in the game.

      • ronch
      • 7 years ago

      They’ve been over-promising since Barcelona. The new management team seems to be aware of this risk and is refraining from doing so. Perhaps this is one example of why AMD is ditching its old management and bringing in new blood.

    • wof
    • 7 years ago

    Is there actually a Sandy Bridge-E CPU with 8 cores / 16 threads, as the table in the review says, or is that a typo or a reference to disabled cores?

      • TO11MTM
      • 7 years ago

      If you bring money, sure you can get one!

      [url<]http://ark.intel.com/products/46499/Intel-Xeon-Processor-X7560-24M-Cache-2_26-GHz-6_40-GTs-Intel-QPI[/url<]

      • chuckula
      • 7 years ago

      There are Xeons that are effectively SB-E chips with all 8 cores turned on, and apparently some of the Xeon models are compatible with LGA2011 desktop/workstation motherboards. You are right that, at the prosumer level, the non-Xeon Extreme parts are currently limited to 6 active cores.

    • ronch
    • 7 years ago

    I’ve always thought the Bulldozer architecture wasn’t really a bad architecture. It’s only when you compare it to Sandy Bridge/Ivy Bridge that it runs into trouble. It’s a very different processor from what we’ve all been used to and I take it as that. It’s not for gaming, but neither is the Itanium. Bulldozer is designed for other things, and as such, it’s a bit unfair to pit it against Intel when it comes to things that it really wasn’t meant to be good at. Plus the fact that AMD was able to pull off such a complex piece of silicon given its resources further bolsters Bulldozer’s validity.

    It’s good to see AMD able to take Bulldozer up a notch and not miss its 15%/year performance improvement target. The Master of Paper said it’s not gonna be enough, and I hope Steamroller will net more than 15% performance improvement next year. Let’s hope it does. There’s never been a direr period in AMD’s history and they can really use all the performance they can get out of the Bulldozer architecture.

    • vaultboy101
    • 7 years ago

    Nice review, Scott, and many thanks.

    However, I'm not overly impressed with this CPU when the Intel alternative is both faster and cheaper.

    The ARM chips on Windows 8 RT are looking more interesting than AMD's products these days, and to be honest, I have a feeling the ARM SoCs are not massively behind AMD in per-core IPC.

    Looking forward to TR's Windows 8 RT CPU testing!

    • chuckula
    • 7 years ago

    Hrmm… according to online reviews, the FX-8350 has exactly the same die size (315 mm²) and pretty much the same transistor count (~1.25 billion) as the FX-8150. This means each FX-8350 is almost double the size of a full 3770K die (~162 mm²), which includes a GPU just for fun.

    When you look at the prices for the 8350, it makes you wonder how much money AMD is really going to see from these chips.

    EDIT: Instead of downthumbs, a thoughtful response about AMD's strategy would be more appreciated. I just pointed out some facts that AMD's management is well aware of, and I would like to hear ideas about AMD's strategy for making money in a competitive environment.
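
    As a rough illustration of why die size matters for margins, here's a naive gross-dies-per-wafer estimate (it ignores edge loss, scribe lines, and yield, so treat it as an upper bound, not real economics):

    [code<]
    import math

    def gross_dies(die_area_mm2, wafer_diameter_mm=300):
        # Naive estimate: usable wafer area divided by die area.
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        return int(wafer_area / die_area_mm2)

    print(f"Vishera (315 mm^2): ~{gross_dies(315)} dies/wafer")  # ~224
    print(f"3770K   (162 mm^2): ~{gross_dies(162)} dies/wafer")  # ~436
    [/code<]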

      • sschaem
      • 7 years ago

      "AMD's strategy"? It's survival.

      Intel sets its prices; AMD adjusts according to what it can deliver.

      And it's 22 nm vs. 32 nm: the 32-nm 4-core Intel CPUs are 216 mm².

      The guesstimate is that AMD pays an average of ~$50 per 300 mm² chip.

      • just brew it!
      • 7 years ago

      If they’re getting decent yields they should still be able to make money in spite of the larger die.

      • flip-mode
      • 7 years ago

      Honestly, there’s probably not a single person here that’s got enough /real/ info on the inner workings of things to be able to say anything meaningful. Sure, we can all talk out our asses as we so often do. We all know bigger die means more cost to produce, but that’s about all we know. We don’t know actual direct or indirect costs. We don’t know the specifics of the contract with GloFo. All we can do is look at die size and point and gasp. I don’t see the point of the exercise.

    • chuckula
    • 7 years ago

    Great Review! As a follow-up, would it be possible to OC an original Bulldozer up to the clockspeed of the 8350 to see how much of the performance improvement is from the increased clocks compared to fixes in the cores?

    EDIT: Judging by the downthumbs it looks like some of the AMD fanboys would rather not see a clock-to-clock comparison. Funny how they clamored for one between Sandy Bridge and Ivy Bridge. If the IPC increase for Piledriver really isn’t any better than the increase between Sandy & Ivy, it might require them to live in the real world.

    • Meadows
    • 7 years ago

    This might be the first AMD processor in two years that I would consider buying.

      • derFunkenstein
      • 7 years ago

      Yeah that was a pretty nice improvement overall. Some extra clock frequency and some (expected) architecture enhancements went together nicely.

    • mnecaise
    • 7 years ago

    For what it's worth, you must have gotten an overclocking dud. A quick search found other reviewers reaching 5.1 or 5.2 GHz on water, using the BIOS to set the parameters.

    • flip-mode
    • 7 years ago

    So the FX 8350 is officially and unambiguously AMD’s fastest processor ever. Good golly it’s about time! I could accept this chip as my own as far as performance is concerned, but then the power consumption is unacceptable. Now the wait begins for Haswell and Steamroller.

      • jihadjoe
      • 7 years ago

      +1

      Every time it came out on top of the PII 980 and 1100T I was thinking "finally!"
      But at what expense! 4.2 GHz and 8 cores to beat their old architecture with fewer cores at 3.6-3.8 GHz.

      Edit: spellchecker, as -> was. Way too excited while typing this, I guess

        • flip-mode
        • 7 years ago

        Yep, you won’t hear me claiming it’s a pretty sight.

        • sschaem
        • 7 years ago

        You need to look at it for what it is:

        3.3 GHz vs. 4 GHz: a ~20% higher clock.
        6 cores vs. 4 modules.

        The module shifts compute power from FPU/SIMD to general compute (the goal was more branching throughput).

        x264 (33 vs. 44), normalized per GHz per core/module: 1.66 vs. 2.75
        7-Zip (16.8 vs. 22.7): 0.848 vs. 1.41
        Cinebench (5.89 vs. 7.94): 0.29 vs. 0.49

        Take transistor count into account (without L2/L3 cache) - from 'the web' it's 60M per Thuban core vs. 85M per Piledriver module - and we can estimate the compute gain per transistor:

        x264: 16%
        7-Zip: 18%
        Cinebench: 18%

        So we can see that Piledriver gets roughly 15% more compute efficiency per transistor at the same clock rate than Thuban.

        Well, at least it's progress. But was all the money spent worth 15%?
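
        The same normalization, spelled out as a sketch (the inputs are the informal scores and transistor counts quoted above, not official figures):

        [code<]
        # Reproduces the per-GHz, per-transistor normalization above.
        thuban = {"clock_ghz": 3.3, "units": 6, "mtransistors_per_unit": 60}   # 6 cores
        vishera = {"clock_ghz": 4.0, "units": 4, "mtransistors_per_unit": 85}  # 4 modules

        scores = {          # (Thuban 1100T, FX-8350)
            "x264":      (33.0, 44.0),
            "7-Zip":     (16.8, 22.7),
            "Cinebench": (5.89, 7.94),
        }

        for name, (old, new) in scores.items():
            # Normalize to throughput per GHz per core/module...
            old_n = old / (thuban["clock_ghz"] * thuban["units"])
            new_n = new / (vishera["clock_ghz"] * vishera["units"])
            # ...then to throughput per million transistors.
            old_t = old_n / thuban["mtransistors_per_unit"]
            new_t = new_n / vishera["mtransistors_per_unit"]
            print(f"{name}: {new_t / old_t - 1:+.0%} per transistor at equal clocks")
        [/code<]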

        • ermo
        • 7 years ago

        Fun fact: The PhII 965 BE retails at $99.99 at the egg right now. It can easily be OCed to 980+ BE clock speeds at stock voltage (3.8 GHz should be a breeze and 4GHz should be possible).

        If you look at the value plot in Scott’s article and move the 980 BE left until it sits at $100 instead of at $160 (it’s actually $139.99 at the egg, although it’s out of stock), it turns into a very interesting deal.

        The way I see it, the 965 BE is the best AMD gaming CPU for the money at this time — especially if you don’t need AES support.

      • Airmantharp
      • 7 years ago

      I just wish AMD could make up for the lost years. Intel decided to just give them everything they needed to hang themselves, then left them in the dust- and AMD is still stuck there.

      Netburst had promise and might have remained competitive if it hadn't become a pariah with Prescott (and if Willamette hadn't been such a disgrace), and Bulldozer has the same potential; AMD is right on the money with a forecast of reduced FPU loads and a need to focus on high integer (i.e., branching) performance.

      • sschaem
      • 7 years ago

      The LGA2011 platform idles at 55 W and draws 180 W during x264 pass 2.

      The FX-8350 idles at 67 W - is an extra 12 watts really unacceptable?
      It draws 196 W during x264 pass 2 - is 16 watts extra really a killer?

      The power numbers seem in line for a workstation platform with 8 cores running at >= 4 GHz.

      The issue is more one of task energy. It's not unacceptable, unless you think an i7-3820 is also an unacceptable platform. But it's not state of the art, that's for sure, even for a 32-nm design.

      The FX-8350 is, again, an overclocked part. I wish someone would do a power-scaling test with the FX-8350, using the lowest stable voltage for 3.2, 3.4, 3.6, and 4 GHz operation.
      My intuition tells me 3.4 GHz might be Piledriver's sweet spot for task energy against 32-nm SB.

        • flip-mode
        • 7 years ago

        True - idle power is 'almost' decent, though still almost 50% higher than Sandy/Ivy. But load power consumption is certainly too high for my tastes. It consumes TWICE the power of the substantially faster i7-3770 at load, and 1.5X as much at idle.

        Some people may not bat an eye at that, but it's unacceptable to me.

        Edit: I've noticed that in your comments on this article you have repeatedly selected specific tests or configurations that show the Intel system in the worst light. Can we just skip all that nonsense and try to have an honest discussion?

          • halbhh2
          • 7 years ago

          Unreliable!… 5 sites, 5 different 8350-vs-3770 idle power deltas!

          xbit: 0 watts
          Anand: 14.7 watts
          TechReport: 22 watts (thanks flip-mode for the correction)
          TechPowerUp: -9 watts (that's right, the 8350 system drew less)
          HotHardware: 18 watts

          That's the difference in idle power draw, which ought to eliminate both the video card and power supply as variables. Ideally, you are left with only the motherboard as the variable.

          Now that I've looked over the motherboards these sites used, my conclusion is that idle power differences and peak power differences from any one site are *not reliable*.

          Unreliable.

          Instead, we'd be better off to *average* the power differences from 5 or 6 sites, to get an average value.
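
          That averaging, as a two-line sketch over the five deltas quoted above:

          [code<]
          from statistics import mean, stdev

          # Idle-power deltas (8350 system minus 3770 system), in watts, as listed above.
          deltas = {"xbit": 0.0, "Anand": 14.7, "TechReport": 22.0,
                    "TechPowerUp": -9.0, "HotHardware": 18.0}

          print(f"mean:  {mean(deltas.values()):.1f} W")   # ~9.1 W
          print(f"stdev: {stdev(deltas.values()):.1f} W")  # ~13.1 W -- larger than the mean
          [/code<]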

            • just brew it!
            • 7 years ago

            Well, since it is total system draw it is impossible to completely eliminate the motherboard, PSU, etc. as potential sources of variability. 18 watt variation does seem like a bit much though.

            • flip-mode
            • 7 years ago

            67 – 45 = 22.

            • Rza79
            • 7 years ago

            Maybe because Xbit uses a 1200W PSU …

            • Farting Bob
            • 7 years ago

            Why would that make AMD idle power consumption better relative to Intel?

            • Rza79
            • 7 years ago

            It wouldn’t make AMD’s better but make Intel’s worse.

            • halbhh2
            • 7 years ago

            Perhaps the Asus Crosshair V Formula is a power hog vs. the MSI Z77A-GD65.

            For differences this big, I think it has to be partly down to the different motherboards.

            No matter what the blend of reasons for the differences between 0 watts and 22 watts, it is a huge gap, and it suggests people need to pay more attention to their components.

            Perhaps the PSU used here at TR for the AMD rig isn't working up to snuff - just that one PSU. Don't know.

            • Spunjji
            • 7 years ago

            Averaging would be good, were it not that too many of these sites use exceptionally basic methods for measuring power consumption. The best of the lot for methodology is TechPowerUp, but even there, we don't have enough data points to establish how reliable their results are in practice.

            I shall be waiting for SPCR on this front; they tend to give figures that are scientifically sound, even if their method of loading the CPU for load figures is a bit simplistic.

            In the end, I'd put money on the idle power difference between AMD and Intel being less than the variation between different motherboards on each platform. As for load… well, we know that's not good.

        • My Johnson
        • 7 years ago

        [url=http://www.tomshardware.com/reviews/a10-5800k-trinity-efficiency,3315.html<]Tomshardware[/url<] did

      • Bensam123
      • 7 years ago

      I can see Steamroller basically doubling the effective cores in certain situations, definitely offering huge performance gains. PD is still crippled as far as core count goes.

    • CaptTomato
    • 7 years ago

    The 3770K is $340 AUD, so if this were $170-185 it wouldn't be so bad, but if you're a gamer looking for every frame, it might be dicey.
    Why is it that AMD can't compete that well in CPUs, but in GPUs is neck and neck? Is it just that CPUs require tonnes more R&D dollars?

      • Bensam123
      • 7 years ago

      BD was a complete departure from the last architecture. And AMD's graphics group isn't the same part of the company that does CPUs… different teams (pretty sure completely different locations, too).

      • ronch
      • 7 years ago

      Both are exceedingly difficult engineering challenges but yes, I imagine it’s harder competing with Intel in the CPU market considering Intel’s vast resources.

        • CaptTomato
        • 7 years ago

        The annoying thing is that even if Intel gets lazy and AMD catches up, it's still not great for consumers: beyond slightly better prices, overall CPU progress ends up pegged to the weaker CPU design.

          • ronch
          • 7 years ago

          Actually, if I remember correctly, back in the Athlon 64 days AMD was selling its chips at very high (for AMD, at least) prices, and Intel was also doing the same. It was only when Core 2 came out that I found buying an Athlon X2 feasible. I bet AMD would be pricing FX in the $500-$1,000 range if FX turned out at 6.0GHz. As it is, at least consumers can benefit from a low-priced lineup because AMD wasn’t able to match Intel. Bad for AMD, but at least FX’s prices aren’t up in the stratosphere.

      • kalelovil
      • 7 years ago

      If AMD had access to Intel’s manufacturing facilities, just as Nvidia and ATi/AMD both use TSMC, it would likely be a much closer fight.

      The fact that Intel has long been the 800lb gorilla in the semiconductor industry, with revenue 10x that of AMD, certainly doesn’t help either.

      • just brew it!
      • 7 years ago

      Intel is the 800 lb gorilla of x86 CPUs and has the best process tech in the industry. It’s tough to compete with that. On the GPU front they’re on more or less even footing with nVidia.

        • CaptTomato
        • 7 years ago

        I guess that all makes sense…..too bad AMD can’t gain more market share.

        • ronch
        • 7 years ago

        Intel is on equal footing with Nvidia on GPUs? How? Perhaps in terms of market share but certainly not on performance.

          • just brew it!
          • 7 years ago

          I was thinking market share and resources. Two things that drive their ability to compete. But they’re reasonably competitive on the performance front too.

      • A_Pickle
      • 7 years ago

      I think that this architecture is just inherently weak for gaming. It’s a parallel monster, which makes sense — but the weak single-thread performance of this architecture illustrates how very multithreaded games [i<]aren't[/i<].

    • buzbal
    • 7 years ago

    Hi,

    There seems to be some wrong information regarding the Hyper-Threading functionality of the CPUs:
    the i5s all lack HT. This is wrong in the charts, and in a sentence saying that the Pentium is the only CPU without it…

    Cheers

      • mikepers
      • 7 years ago

      Buzbal is correct that the chart is wrong, with one minor clarification.

      None of the quad-core i5s (any generation) have Hyper-Threading.

      It's only the dual-core i5s that have HT, and in the latest generations that's only one part per generation.

      You can check here:

      [url<]http://ark.intel.com/#DesktopProcessors[/url<]

        • Damage
        • 7 years ago

        Hmm. To which chart (table?) are you referring? Pretty sure I didn’t credit the Core i5-2400/2500/3470/3570K with HT, but if I did, I can’t find where.

        Oh, and the Core i5-655K (tested here) has Hyper-Threading. You can look it up in ARK. 😉

          • mikepers
          • 7 years ago

          I was referring to the table on page one where you list the code names and other CPU specifications. It's a pretty minor thing, though. It's not that the table is wrong; I think it's just not clear that the only i5s with HT, across all the generations, are the dual cores. The i5 quads don't have it, so no i5 can run more than 4 threads.

          Great write-up, though. I'm with you in that I will stick with Intel until the power/heat/noise of the AMD chips gets to be a little more competitive.

            • buzbal
            • 7 years ago

            It's on the first page…
            The chart lists the Sandy and Ivy Bridge CPUs as all having 4 cores and 8 threads, and it also mentions i5s and i7s in the same row…

            Also, on page 7, at the end, is the sentence I meant regarding the Pentium…
            Though I'm not sure anymore why I had the impression you said it was the only Intel CPU without HT…
            Did you change it for clarification since then?

    • sircharles32
    • 7 years ago

    Good; it's finally faster than both Deneb and Thuban in the vast majority of tasks.
    Now, just to wait for motherboard manufacturers to release updated BIOSes.
    The price is right. The performance is acceptable. It uses less electricity to complete a given task than Bulldozer. Now I just need a motherboard.

      • anotherengineer
      • 7 years ago

      Yes it is; however, the clock speed is also over 1000 MHz higher. Still, it's a step in the right direction.

      If they can fix/improve the other remaining bottlenecks within the chip, it's going to be a really good chip.

        • sschaem
        • 7 years ago

        Yep, you really see the potential in Cinebench, BF3, x264, 7-Zip, etc…

        My prediction is that we will see fewer single-threaded games and apps as time goes on, not more.
        So the FX will slowly get a more even playing field.

        The concern I have is, now that AMD seems to have decent products (Z-60, A10-5800K, FX-8350, HD 7xxx), will it survive long enough for another product refresh?

          • deathBOB
          • 7 years ago

          We’ve been hearing that story for years. BF3 is the exception, not the rule.

    • Unknown-Error
    • 7 years ago

    Thanks for the great review, your majesty [b<]Scott Wasson[/b<] :p

    Look how application-dependent the FX-8350 is! On occasion, when the app is heavily threaded, it can be faster than an i7-3770K and probably competitive with the LGA2011 CPUs, but jump to single-threaded work and it drops below the i3-32x0 series. One thing, though: if you are a gamer or power-conscious or both, the FX-x3x0 series is not for you. Otherwise, for 200 bucks, I think the FX-8350 is a pretty decent bargain.

      • ronch
      • 7 years ago

      I think you missed 2011.

        • Unknown-Error
        • 7 years ago

        The FX-8150 was not a good bargain at all, especially compared to the 2500K; it's almost pointless mentioning it. In contrast, the FX-8350's price/performance is certainly better.

          • travbrad
          • 7 years ago

          The price/performance competitiveness is also helped by the fact that Intel's CPUs haven't really dropped in price at all in that time. Really, the best time to buy a CPU was a year or two ago. I picked up a 2500K for $160 (after tax) over a year ago, and that is still better price/performance than anything currently available.

      • A_Pickle
      • 7 years ago

      Well, Intel’s approach is to have a few strong cores, assisted with some logical side-threads. AMD’s approach is to have each thread equally meaningful. Novel, but [i<]man[/i<] do they get their butt kicked in games as a result of that...

        • abw
        • 7 years ago

        Assisted, yes, even more than one can imagine…

        By Intel’s ubiquitous compiler and its CPU dispatcher…

          • chuckula
          • 7 years ago

          Well abw, since AMDzone must have crashed and you are a refugee, why don’t you tell us how the geniuses at AMD will write their own compilers, then? How often do you accuse AMD of cheating on those “OpenCL” benchmarks where AMD’s own engineers show up, rewrite software specifically to run only on AMD hardware, and then go around bragging about how OpenCL is the future? Where were you then, calling out AMD for not “being fair”? Oh I know, you were over at AMDzone spiking the Kool-Aid.

            • derFunkenstein
            • 7 years ago

            wow you sure pissed off somebody. +1

            • abw
            • 7 years ago

            Truth hurts!

            • abw
            • 7 years ago

            Eating pills by the hundreds before posting?

            FYI, I’m registered at AT and HFR; I don’t even know this AMDzone. But when I make posts like this at AT, there’s often the usual meds addict who comes around spreading his pathological nonsense.

            Btw, AMD’s compiled OpenCL works perfectly on Intel CPUs, way better than Intel’s own compiled OpenCL; you just don’t know what you are talking about.

            Now you can return to your cave.

            • derFunkenstein
            • 7 years ago

            OK seriously, this is just personal attacks. Nothing at all worth talking about.

        • sschaem
        • 7 years ago

        Butt kicked? BF3 is 82 fps vs 84 fps, and Crysis 2 only 86 vs 88 fps… Yeah, AMD is totally pwned!

          • Meadows
          • 7 years ago

          You managed to cherry-pick the ONLY TWO games that were GPU-bound. Congratulations on your failure to be relevant.

            • Airmantharp
            • 7 years ago

            That’s GPU-bound and CPU-dumb; those games are so GPU-bound that they make any competent CPU look decent. Switch BF3 to a populated 64-man server, and watch the AMD CPU-based systems throw high-latency frames.

    • kristi_metal
    • 7 years ago

    Glad to see AMD still has some kick left; it runs nicely in multithreaded applications, but they still have to work on that single-core efficiency.
    The performance in games is lower than SB’s because most games don’t use more than 2 cores.

    • Jigar
    • 7 years ago

    Finally something from AMD’s CPU department that is not embarrassing.

    • clone
    • 7 years ago

    Glad to see AMD finally surpassed their previous-generation chips… that was getting pretty painful to read.

    I may buy one of the lower-end ones to replace my triple-core, depending on price.

    • My Johnson
    • 7 years ago

    Will you be reviewing the FX-4300?

      • clone
      • 7 years ago

      that would be the one I’m more curious about as well.

      • ronch
      • 7 years ago

      Check out Anandtech’s review on Vishera. They’ve included the FX-4300 in the charts.

        • clone
        • 7 years ago

        thx for that.

    • ronch
    • 7 years ago

    This is such a far cry from last year when the long-awaited Bulldozer architecture came out. It was slow, a power hog, and expensive (yeah, like you didn’t know that). AMD got all three bullet points wrong and we all had sleepless nights thinking about it. With this new lineup however, it looks like AMD has learned (a lot!) and has at least corrected one bullet point: Price. Between the Core i5-3470 (also my target if I were buying Intel even before TR christened it) and the FX-8350, it’s gonna be a tough call. For the money, the i5 is obviously the choice for trigger-happy gerbils out there who don’t give a crap about eight cores, but hey, eight cores is eight cores! Choosing between better gaming and the allure of having eight cores is tough, you know. Very tough. But hey, even if you choose the FX-8350 and play games with it you could always tell yourself, “JUST WAIT TILL GAMES UNLEASH ALL MY PILEDRIVER CORES!!!”

    Performance is also up to the level where I won’t be too embarrassed putting my FX-8350 PC (if I had one) next to a Core i5 or i7 Ivy. Not that it’s gonna beat the hell out of Intel’s finest, but at least Intel’s awesomeness will find it harder to melt that FX chip sitting right next to it. It’s pretty good in my book. Oh, and did I mention I’m still using an IBM PC/XT?

    Power? Well, power… let’s just say, if I had three eyes, the two of them would be bright with glee because of Vishera’s price and performance, but my third eye, which has been staring hard at those power graphs, will probably shed a small tear if you look close enough. You see, there’s practically no difference in power draw, idle or load, between the FX-8150 and this new baby here! And I don’t like that. In my frickin’ country (don’t ask where!), it’s like they’re using gold and diamonds to fire up our turbines, so that works out to about $5 extra every month for me for running the FX for 4 hours on load and 4 hours on idle every day. Not that I’d go bankrupt over $5, but money is money.

    In the end, I find this new lineup very compelling, particularly the FX-8350. Should I finally bite or do I wait for Steamroller? I’m sure it’s not just me asking this question.

      • tfp
      • 7 years ago

        [quote<]ok, if I had three eyes, the two of them would be bright with glee, but my third eye, which has been looking at those power graphs, will probably shed a small tear if you look close enough.[/quote<]

        [url=http://en.wikipedia.org/wiki/Triops<]Triops[/url<]

        Rabbits have two eyes
        And whales have two eyes
        And eagles have two eyes
        But triops has three eyes
        Triops has three eyes

        Two eyes on a face
        Are usually enough
        But triops has got
        One that looks up
        And one that looks around
        And one to keep an eye
        On the other pair of guys

        Triops has three eyes

        • ronch
        • 7 years ago

        How fascinating. That’s enough to get my mind off Vishera for like 10 minutes LOL

        • sircharles32
        • 7 years ago

        Is that Shel Silverstein? Sounds like something he would have done.

          • derFunkenstein
          • 7 years ago

          They Might Be Giants.

          edit: video: [url<]http://www.youtube.com/watch?v=tLXdMeeZeGo[/url<]

            • tfp
            • 7 years ago

            BTW,

            Tripods have three legs
            Tripods have three legs

      • flip-mode
      • 7 years ago

      I think ‘compelling’ is definitely too strong a word. Performance is better, but we are talking about a very small improvement, still far short, per core, of where it needs to be. Power consumption is still quite crummy, and bad enough to be a deal breaker in my opinion.

      Price is definitely decent. But….

      But unless you’re massively multi-threaded, an i5-3450 is going to be a much better choice – in my opinion.

        • ronch
        • 7 years ago

        Yeah. I did acknowledge that choosing between better gaming performance and having eight cores can be tough. I don’t game that much anymore, however, and I read a comment on Extremetech’s take on Vishera from a guy who builds i7 and FX systems. The i7 is quicker in a straight line, but pile on the tasks and the i7s lag noticeably. The FX is less quick in a straight line but runs more smoothly (‘flawlessly’, he says) when you multitask. Now, I really can’t say how objective or repeatable his experience was, but given my use-case scenarios it’s a bit difficult deciding between the i5-3470 and the FX-8350, since each chip seems to offer its own advantages. Here’s the link, just for reference (comment by Philip Samuelson):

        [url<]http://www.extremetech.com/computing/138394-amds-fx-8350-analyzed-does-piledriver-deliver-where-bulldozer-fell-short[/url<] I have to admit power consumption can be the deciding factor, though. I'd feel bad about 100W going down the drain every time I fire up an app that taxes the FX.

        • Airmantharp
        • 7 years ago

        And by massively multi-threaded, you mean massively multi-threaded all the time- because yeah, the i5 is better in every circumstance, and an overclocked 2500k (or 3570k) is still a better deal for gaming while also being faster on the desktop AND using less power.

        +1.

          • ermo
          • 7 years ago

          Just curious, but if you can get a 2nd-hand 2600K for the price of a new 3570K, which is the better deal?

          FWIW, I went with the lightly used 2nd-hand 2600K in my build, due to the extra threads being useful once I move it ‘down’ the rung to Linux duty, where compiling will benefit massively from SMT.

            • Airmantharp
            • 7 years ago

            For gaming, they’re about the same- there isn’t a single feature that really matters today. Anything that could matter won’t be used by the time the CPU is replaced, such as PCIe 3.0 and extra instructions (assuming a build with only one or two GPUs). You’d be getting a Z77 board either way, which would give you Intel’s USB 3.0, allowing faster and more stable transfers along with bootable drives, and that’s about all Z77 has over Z68. You might consider the difference between HD3000 and HD4000 a contender, but you’d be really splitting hairs- I use my HD3000 to power a pair of secondary monitors, and that’s it.

            Hyper-threading isn’t a win for gaming (yet), but as you’ve stated it’s quite useful in many other situations; since Sandy overclocks better and cooler than Ivy, yielding essentially the same basic overclocked performance on average, I’d say you made the right call.

      • halbhh2
      • 7 years ago

      $5? Is that $0.40/kWh?? That’s an extremely high utility rate for electricity. We have a more typical rate here where I live. For parameters, I’m using 90 watts extra power at load, and 15 watts more at idle. (Note that idle power differences vary by web site! Is the Crosshair V especially power hungry?)

      At a U.S.-typical 16 cents/kWh I get about $2/month for your 4 hours at 100% and 4 hours idle each day. This works out to an extra 0.4 kWh per day. (Note that I use fewer than 2 significant digits, due to all the variables; 2.07, for instance, would just be 2.)

      My own very realistic usage scenario is to do chess analysis with Deep Fritz about 1-5 hours/week. Xbit put the 8350 performance in Deep Fritz very close to the i7 3770 (7% is nothing in chess analysis — it’s geometric as you add ply levels).

      For me, the extra power cost if I use the 8350 vs a i7 3770 is in the range of $4-$10 per *year*.
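      For anyone who wants to check that arithmetic, here is a minimal sketch of the cost calculation, taking the 90W/15W deltas and the 4+4 hours per day above as given (the wattage figures are the poster’s assumptions, not measured values):

          extra_kwh_per_day = (90 * 4 + 15 * 4) / 1000   # load + idle deltas: 0.42 kWh/day
          monthly_kwh = extra_kwh_per_day * 30            # ~12.6 kWh/month
          for rate in (0.16, 0.40):                       # $/kWh: U.S.-typical vs. the implied "$5" rate
              print(f"${rate:.2f}/kWh -> ${monthly_kwh * rate:.2f}/month")
          # $0.16/kWh -> $2.02/month; $0.40/kWh -> $5.04/month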

    • Sam125
    • 7 years ago

    I must say, I really am kind of impressed with Vishera. The power usage is kind of high so a shrink to 22nm ASAP would go a long way to make PD even more enticing.

      • ronch
      • 7 years ago

      [quote<]so a shrink to 22nm ASAP would go a long way[/quote<] Hold on, I'm calling Globalfoundries...

    • My Johnson
    • 7 years ago

    The Catalyst drivers seem a little old? But asking that question is like asking why you didn’t use an Nvidia GPU. :/

      • derFunkenstein
      • 7 years ago

      Unless you’re accusing AMD of optimizing the drivers for its own CPUs better (or at all) than for Intel’s, it doesn’t matter, as long as everything else is equal.

    • just brew it!
    • 7 years ago

    OK… so not a home run by any means, but a solid double in a do-or-die situation. They’re still in this game, but barely.

    The take-away for me: Price/performance at stock looks very good (games excepted); power consumption is a little on the high side (but not outrageously so); and overclocking headroom (based on a sample size of one) is mediocre.

    Given that I’m not much of a gamer or overclocker any more, this is probably enough to keep me in the AMD camp for at least one more iteration. I’m gonna wait for the price to drop a bit though, as Newegg seems to be charging a premium on launch.

    Or… hmm… if this drives the price of the FX-8150 down to say, $135, that could be an option as well.

    @Krogoth (#16) – Yup. Support for advanced tech like ECC and full hardware virtualization (including IOMMU) in their consumer-oriented parts is one of the reasons I continue to use AMD CPUs (and Asus motherboards) almost exclusively for my own builds. But I guess I’m not a “typical” PC enthusiast…

    • Arclight
    • 7 years ago

    A little better than I expected, frankly, but I still cringed every time the 980 or the 2120 was above it in any metric. Power consumption is still high and overclocking is really underwhelming, but for die-hard AMD fans it does seem like a better proposition than the 8150.

    Imo they should have gone balls-out, clocked the base frequency at 4.4-4.5GHz, and sold it as a 140W part. It’s not like people buying this will be conscious of, or mind, the peak power consumption anyway.

    Also, given how well the BF3 code runs on all chips, I’m inclined to believe the frame times are so high in other games because the developers neglected the AMD chips and favoured the Intel ones.

    Someone care to comment on this?

      • I.S.T.
      • 7 years ago

      Or possibly their code is simply more friendly to Intel’s chips no matter what they do, or it’s possibly more stressful per thread.

      As for PD itself, nice that AMD is in the process of surpassing their old CPUs. I’m sure there’s a fair bit of stuff not shown here that works better on the X4 980s, but this closes the gap quite a bit. Next CPU rev by AMD should make it universally faster.

        • Arclight
        • 7 years ago

        [quote=”I.S.T.”<]As for PD itself, nice that AMD is in the process of surpassing their old CPUs. I'm sure there's a fair bit of stuff not shown here that works better on the X4 980s, but this closes the gap quite a bit. Next CPU rev by AMD should make it universally faster.[/quote<] Agreed, let's just hope they don't screw up for once and they manage to impress us for the first time in years.

    • Krogoth
    • 7 years ago

    This guy is a workstation-class and server-grade part on the cheap.

    You can get ECC support for a fraction of the cost of an Intel equivalent. On the Intel side, you have to get Xeon-grade silicon on top of a pricier workstation platform to get ECC support. IIRC, the BD architecture is pretty solid with VMs, if that is your thing.

    Gaming doesn’t matter that much anymore on the CPU front. It has been this way since Nehalem’s debut. i3 chips based on SB/IB yield practically the same gaming performance as their more pricey i5-i7 brethren, yet they still outrun AMD’s best effort.

      • Arclight
      • 7 years ago

      [quote=”Krogoth”<]Gaming doesn't matter that much anymore on the CPU front. It has been this way since Nehalem's depute. i3 chips based on SB/IB yield practically the same gaming performance as their more pricey i5-i7 brethren, yet they still outrun AMD's best effort.[/quote<] But, but those frame times tell a different story. CPU absolutely does matter for gaming (except for BF3 due to some unknown reason).

        • Krogoth
        • 7 years ago

        Check the numbers again.

        CPU performance on the gaming front hasn’t seen any significant gains since the depute of Nehalem. You don’t need a $1,000 CPU, or even a $299 CPU, to get sufficient gaming performance.

          • Arag0n
          • 7 years ago

          Agreed; to me it looks more like a single-thread performance problem. As soon as a game multi-threads properly, every CPU performs almost the same, and those that don’t thread properly let CPUs with higher single-thread performance have some advantage, but that advantage usually is not significant. There is NO CPU in the tests below 50 fps in any game, including last-generation Llano!

            • Arclight
            • 7 years ago

            Sigh, I guess you only check average fps then.

            • Arag0n
            • 7 years ago

            No, I don’t. But even in the non-average measures, the longest any CPU stays at a “low frame rate” is, at most, 2s in a run of 2 minutes of gameplay. I guess that’s noticeable, but actually it is insignificant…

            • MadManOriginal
            • 7 years ago

            When ‘everything’ has good average framerates though, the hitches due to long frame times are exactly what you’ll notice.

            • Arag0n
            • 7 years ago

            I agree with your statement, but I think those hitches, as shown by the review, are almost non-existent in almost every game. And remember, those games are about the most CPU-intensive games ever made. Try playing anything else and the difference will be totally non-existent. That’s why I’m trying to draw your attention to the fact that the longest a processor stays beyond the point where you really notice a huge drop is 2s in 2 minutes. Yeah, you will notice it, but would you pay $100 to not see it? I don’t think so.

            • grantmeaname
            • 7 years ago

            Did you catch the part where the first meaningful metric they could use from their “time beyond” statistic is 16.7ms, corresponding to 60FPS? That’s buttery smooth! In their GPU reviews they use 50ms = 20FPS.
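            For reference, the threshold-to-frame-rate conversion is just the reciprocal; a quick sketch of the two thresholds mentioned:

                # frame-time threshold (ms) -> equivalent frame rate (fps)
                for ms in (16.7, 50.0):
                    print(f"{ms} ms -> {1000 / ms:.0f} fps")   # 16.7 ms ~ 60 fps, 50 ms = 20 fps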

            • sweatshopking
            • 7 years ago

            What does “buttery smooth” mean? That term is often thrown around, but I don’t usually think of butter as being ‘smooth’; I think of it as being more greasy. Maybe it’s just me, but isn’t there something that makes more sense?

            • anotherengineer
            • 7 years ago

            Indeed.

            TR should start doing a Source benchmark.

            I used to play CSS, and I remember that having a single core vs. a quad core did nothing — that is, until Valve refined their software with the multi-core rendering option.

            Enabling this would bring my fps in Source from 150 up to the ceiling at 300, and the four cores would run at a moderate average load instead of 1 core running at 100%.

            I think this would be a good metric for showing the differences attainable with single- and multi-threaded games.

            So how about it, TR? Put a Source engine benchmark back in, with single- and multi-core options?

          • Arclight
          • 7 years ago

          But when were we talking about a $1K CPU vs. a $300 CPU? We all know the $1K one is overpriced and, for gaming, not worth it. The clear difference is between AMD and Intel chips in the same price category, and between different models of the same architecture; at least that’s what I understood from the frame times (again, with the exception of BF3; why can’t more games be like it?).

        • juampa_valve_rde
        • 7 years ago

        No unknown reasons. Looks like the Frostbite engine is nicely threaded.

          • Arclight
          • 7 years ago

          Saying it’s nicely threaded doesn’t solve the mystery for me. Of course it’s highly threaded, but why are we seeing only academic differences between a lowly A8-3850 and an i7-3770K? To me it doesn’t make sense; you must agree that those chips are in no way, shape, or form equal or close in terms of performance. Even if the program is very efficient, it should show bigger gaps between chips of different performance.

          • jonjonjon
          • 7 years ago

          Frostbite was actually designed for the PC and isn’t just a console port like a lot of games. That’s why Battlefield 3 sucks on Xbox. Here is an interesting article by MS about how to take advantage of multithreading on Xbox and Windows.

          [url<]http://msdn.microsoft.com/en-us/library/windows/desktop/ee416321%28v=vs.85%29.aspx[/url<]

        • Airmantharp
        • 7 years ago

        BF3 single-player is CPU-dumb; I understand why they included it, but it’s hardly relevant to a CPU discussion.

        It’s rough to do, but the MP side tells a different story. That game will keep a high-end GPU (~HD7850/GTX660+) busy along with four of Intel’s cores running at 4.5GHz+. AMD still hasn’t made a part that’s recommendable for it; they’re too slow, too hot, and too expensive for what they deliver.

          • Arclight
          • 7 years ago

          I think I figured it out. The only time in the past when CPUs were equal was when the game was GPU-bound. Since BF3 is multithreaded, GPUs have yet to reach the limit where they need a faster CPU in order to work at their full potential. Thus, with the current generation, the stark differences in CPU performance for BF3 should become apparent when using multi-GPU configs with high-end cards. Lo and behold:

          [url<]http://vr-zone.com/articles/amd-fx-8350-vs-intel-core-i7-3770k--4.8ghz--multi-gpu-gaming-performance/17494.html[/url<]

      • Meadows
      • 7 years ago

      How many corrections of “depute” do you require to get the word “debut” into your head?

        • ronch
        • 7 years ago

        Uh oh. Longest discussion (or argument) thread on this article coming right up…

      • flip-mode
      • 7 years ago

      I’m not sure how many desktop users will care. Some very small percentage will, I’m sure. Most people will say, “That doesn’t mean a damn thing to me.”

        • just brew it!
        • 7 years ago

        Quite true. But I (for one) value stability quite highly, and do tend to favor systems that support ECC.

          • flip-mode
          • 7 years ago

          Yep, you immediately came to mind. I think you’re a rare dude, JBI.

            • derFunkenstein
            • 7 years ago

            I do, too. I doubt there are that many people going with AMD for this specific reason, though I suppose the good will created by doing so might have a halo effect to a degree, among the technology-savvy.

            • flip-mode
            • 7 years ago

            I do wish the same was available for Sandy / Ivy and that Intel didn’t do feature castration on things like virtualization support.

            • derFunkenstein
            • 7 years ago

            Well sure, it would be awesome, but that would cut into their high-priced Xeon market. 😉

            • ermo
            • 7 years ago

            The Core i7-equivalent Xeons are hardly ‘high-priced’ though — they are only a few dollars more than the K model (ark.intel.com). EDIT: Not to mention that the E3-1245v2 is probably the best buy of the bunch:

            Core i7 3770K (3.5/3.9) = $342 BOX
            Core i7 3770 (3.5/3.9) = $305 BOX
            Xeon E3-1275v2 (3.5/3.9) = $350 BOX
            Xeon E3-1245v2 (3.4/3.8) = $273 BOX

            I don’t know if they actually burn fuses on the dies or just handle it with microcode or whatever, but Intel’s segmentation strategy drives me up the wall — I wish I could get a Core i7-3770K with ECC and VT-d support and DDR3-1600 ECC RAM in a P8Z77V Deluxe board. End of story.

            Barring that, at least allow me to put a Xeon E3-1275v2 into my P8Z77V Deluxe board and let me use the ECC functionality and TurboV feature to clock all cores to 3.9GHz when I decide I need the extra juice to play a game for instance.

            The FX-8350 allows JBI and me to both get an unlocked multiplier [i<]and[/i<] use ECC RAM, which is why we're having a hard time swallowing Intel's practices.

            • flip-mode
            • 7 years ago

            Good info; and well said.

            • derFunkenstein
            • 7 years ago

            That last bit is my point. To get fully unlocked silicon, you have to pay through the nose.

          • Airmantharp
          • 7 years ago

          I have to wonder, never having had a system with ECC, nor having used one extensively for real work (gaming is real work, computing-wise!).

          Outside of mission-critical systems, is there really a need for ECC? Not that I’d build a workstation (used for actual work) without it; just asking.

            • ermo
            • 7 years ago

            from [url<]http://lambda-diode.com/opinion/ecc-memory-2[/url<]: [quote<] "The conclusion is the same : you ABSOLUTELY need to use ECC memory if you intend to use your computer for any kind of remotely serious use. Testing your computer for a few weeks isn't even sufficient since errors may manifest themselves later in life, when you have much more data to lose. In fact, Microsoft recommends ECC for Vista ; in other words, ECC is no more for servers and aerospace. " [/quote<]

            • just brew it!
            • 7 years ago

            Heh. That probably goes a little overboard to the other extreme, but there’s some truth to it. Bottom line is, DRAMs have an inherent baseline error rate. Even if all of the DRAM chips in your system are perfect, some errors are unavoidable due to background levels of radiation. (And background radiation increases with altitude, so if you live in say, Denver, ECC is more important than if you live in Boston!)

            Most DRAM errors are benign. They either hit a memory cell that ultimately gets rewritten before the value gets used, or have an effect that is not noticeable like (say) a fleeting glitch in a game or video playback. If you get unlucky, the flipped bit might cause an application crash, BSOD, or (worst case) silently corrupt critical data or filesystem meta-data. This is why servers and workstations used in critical applications should ALWAYS use ECC — it’s not just a matter of uptime, it is insurance against important data silently getting changed behind your back.

            TBH a large part of my ECC fanaticism is probably the result of being one of the engineers who worked on a study of “soft” memory error rates back in the early 1990s. Back then, the rate we measured was on the order of 1 flipped bit per day per 32 GB of RAM. That’s a LOT of errors, especially given the amount of RAM in a typical system these days! To be fair, this is somewhat of an apples-to-oranges comparison since DRAM manufacturers have worked over the past couple of decades to get their soft error rates down. One of the breakthroughs came when the chip makers and DIMM vendors realized that they had to be very careful to control the levels of trace impurities in the chip packaging and solder; minuscule amounts of unwanted radioactive elements in close proximity to the DRAM dice turned out to be a significant contributor to error rates! But the fact remains: Flipped bits in DRAM happen, you just don’t notice most of the time; and even when you DO notice, you’re more likely to blame the symptom on a buggy application, flaky device driver, etc.
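            To put that quoted rate in perspective, here is a minimal back-of-the-envelope sketch. It takes the 1-bit-per-day-per-32GB figure at face value and scales it linearly to a few system sizes; as noted above, modern DRAM does better, so read these as an upper bound:

                RATE_PER_GB_PER_DAY = 1 / 32   # flipped bits per GB per day, per the early-'90s study above

                for ram_gb in (8, 16, 32):
                    flips_per_year = ram_gb * RATE_PER_GB_PER_DAY * 365
                    print(f"{ram_gb} GB: ~{flips_per_year:.0f} flipped bits/year")
                # 8 GB: ~91/year, 16 GB: ~182/year, 32 GB: ~365/year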

            So anyhow… I’m a firm believer in using ECC RAM. With the ultra-cheap DRAM we’ve had these past few years it doesn’t even need to cost an arm and a leg — Kingston ValueRAM unbuffered ECC DIMMs are quite affordable. And I’m thankful that AMD (and Asus on the motherboard side) are doing their part to keep platforms that support ECC (and other features that Intel apparently considers “server/workstation class only”) affordable.

            • Airmantharp
            • 7 years ago

            Thanks for the write-up!

            I guess I was mostly uncertain as to whether or not these errors ‘mattered,’ and I can see the appeal for workstation use. I don’t have anything that critical right now, but you’ve certainly opened my eyes; I’ll have to add ECC as a consideration in future builds.

      • Delphis
      • 7 years ago

      Not sure I would want this in a server that’s on 24/7, burning up much more power for not enough gain, IMHO. More power usage + more heat to get rid of :/

        • A_Pickle
        • 7 years ago

        I dunno. I was thinking that one of these chips would make for an excellent multi-purpose home server — you could probably host quite a few applications from a Vishera-based system and it’d perform well. Maybe. Minecraft Server sure won’t, though.

    • odizzido
    • 7 years ago

    I was expecting less from AMD so this is a nice surprise.

    I kept hearing 6%, but if you look at, for example, Skyrim’s 99th percentile, it is actually 17% faster.

    And just another random one I chose, the Panorama Factory stitch: it is 19% faster.

    Given that idle/load power is pretty much exactly the same, that’s a pretty solid improvement.

      • dragosmp
      • 7 years ago

      It does look like they improved the most in BD’s weak spots, branches and some FPU operations, as shown by those tests.

      I’m pretty happy with this launch, it is the first AMD CPU since the 1100T that is overall faster than its predecessor. It’s been a long time…

      • ermo
      • 7 years ago

      Don’t you have to factor in the fact that the base clock is higher by 4000/3600 = 10/9 = 1.111… ≈ 11% as well?

      (10/9) × 1.06 ≈ 1.177… EDIT: 1.06 is the reputed 6% increase in IPC.

      That jibes with a 17% to 19% increase in performance, doesn’t it?
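      In code form, a minimal check of that estimate (the 4.0GHz vs. 3.6GHz base clocks and the 6% IPC figure are as stated above):

          clock_ratio = 4.0 / 3.6    # FX-8350 vs. FX-8150 base clock, ~11% higher
          ipc_gain = 1.06            # the reputed ~6% Piledriver IPC increase
          speedup = clock_ratio * ipc_gain - 1
          print(f"~{speedup * 100:.0f}% combined")   # ~18%, in line with the 17-19% gains above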

        • deruberhanyok
        • 7 years ago

        Yes, but only if you look at the clock speed as your common denominator.

        If, for instance, the TDP is the same, and the price is the same, then you’re looking at a ~17-19% increase in performance over the previous generation.

        While the IPC may not have increased by that much by itself, the overall effect – architectural improvements + higher clock speed + better power efficiency – is that of a 17-19% increase in some situations.

        That said, I too am curious about the performance increase at same clock speeds, as I expect the difference is very minor going from bulldozer to piledriver.

          • ronch
          • 7 years ago

          At the same clock Vishera is theoretically 6% faster than Zambezi. Ermo showed it in his equation.

    • StuG
    • 7 years ago

    What Bulldozer needed to be. However glad to see they are now moving in the right direction.

      • ronch
      • 7 years ago

      Vishera is to Bulldozer as Deneb/Shanghai was to Barcelona.

        • ermo
        • 7 years ago

        Except it is not a die-shrink and it doesn’t improve power consumption any, only IPC and therefore power efficiency? A more apt comparison would probably be Deneb -> Thuban, as Thuban was also more power efficient and brought with it a slight increase in IPC despite using the same 45nm node IIRC.

        Still, it’s decently priced progress and progress was what we needed to see. But I’m with StuG: If only Piledriver/Vishera had landed instead of the original Bulldozer/Zambezi last year…

    • Derfer
    • 7 years ago

    Well…. at least they finally beat the phenoms. That’s got to count for something right? right??

      • brucethemoose
      • 7 years ago

      Well, it’s about the same clock for clock… but at least it clocks higher.

        • Arclight
        • 7 years ago

        I get what you mean but the wording is wrong.
        “They perform about the same, but clock for clock Phenoms are still better.”

        That said, there are benches where the 8350 is clearly above and beyond the Phenoms. In others they fall behind due to low IPC.

    • Arag0n
    • 7 years ago

    Now AMD needs a better manufacturing process to improve power consumption, but the architecture is finally competitive. I’m sure that an FX-8350 manufactured by Intel would have power draw similar to Intel’s processors. Maybe 14nm FinFET from TSMC can help them with the APUs, but I’m not sure how it’s going to help them with the normal CPUs.

      • brucethemoose
      • 7 years ago

      [quote<] the architecture is finally competitive [/quote<] In everyday consumer tasks, this "4 module", 8-thread chip at 5GHz just barely catches up to the 4-core, 4-thread i5-2500K at its 3.7GHz Turbo in certain tasks, but uses a LOT more power and has a much larger die than that same chip released in January 2011. AMD technically has a process advantage here, too. It's not exactly competitive, just a step in the right direction. This chip should've competed with Nehalem, but it's VERY late to the party.

        • Arclight
        • 7 years ago

        [s<]I'm not saying I disagree, but note the chip can't possibly reach 5GHz, since it's stable only at 4.5GHz with a dangerous >1.5V.[/s<] Edit: at least with TR's review sample, that is. Just saw the other reviews, which were a bit luckier in terms of OCing. I think a special article on OCing and undervolting is in order.

        • Arag0n
        • 7 years ago

        This chip is pretty similar, performance-wise, to the Sandy Bridge Core i5-2500K and Core i7-2600K. Don’t focus just on the gaming benchmarks, because gaming is usually single-threaded and will favour architectures designed for powerful single-thread performance over architectures designed to be wide but weak per thread. The point is, with a mixed bag of both single- and multi-threaded benchmarks, the two architectures perform similarly.

        The difference in power consumption between Sandy Bridge and AMD, with AMD consuming around 20 to 30% more, looks to me like a fabrication-process problem. AMD uses SOI and Intel uses high-K gates for 32nm fabrication, among other small differences. So I don’t think AMD has an architecture problem but a manufacturing-process one, and honestly, I don’t see how they will be able to work around it any time soon. Intel seems to be far ahead, and I don’t think GlobalFoundries, with money from both AMD and its other customers, has enough cash to close the gap. TSMC is more likely to catch up, but still, the only chance is 14nm FinFET.

        You can’t compare the two architectures using 22nm Ivy Bridge as a reference; the manufacturing process for Ivy Bridge is just too far ahead of AMD’s.

          • MadManOriginal
          • 7 years ago

          [quote<]You can't compare both architectures using IvyBridge 22nm as reference[/quote<] You can't? Funny, because I *can* purchase said 22nm Ivy Bridge CPUs.

            • wierdo
            • 7 years ago

            He’s obviously talking about academic comparisons between architectures, it’s not for making buying decisions.

            • Arag0n
            • 7 years ago

            You got the point, thank you. What I’m trying to say is that if Vishera were made by Intel and the Core i5/i7 were made by AMD, we would have a pretty similar picture of performance and power consumption, so architecture-wise AMD is at parity now. What AMD needs is manufacturing parity, but I have no idea how they are going to achieve it…

            • brucethemoose
            • 7 years ago

            My point is that AMD’s 32nm process is superior to Intel’s back in 2011, and AMD’s process advantage over those chips doesn’t seem to help.

            • Arag0n
            • 7 years ago

            Come on! You must be kidding. Intel uses a high-K process while AMD uses SOI… High-K is a huge advantage at 32nm; it reduces leakage and allows lower voltages.

            • NeelyCam
            • 7 years ago

            GloFo 32nm SOI process is using High-K gate dielectric. The two processes (32nm bulk Intel, 32nm GloFo SOI) are roughly equivalent in terms of transistor performance.

            The big difference is that GloFo is using ‘gate-first’ methodology, while Intel is using ‘gate-last’ – ‘gate-last’ is said to provide better yields and lower variation, at the expense of area efficiency.

        • Airmantharp
        • 7 years ago

        This chip would’ve competed nicely with Nehalem.

    • brucethemoose
    • 7 years ago

    These chips like being under water. Two of the reviews I’ve seen use big tower coolers and don’t go near 5GHz, while the other two, which use closed loops, readily pass the 5GHz barrier.

    Still, that $190 2500K at Microcenter pretty much makes this chip irrelevant.

      • xeridea
      • 7 years ago

      How do you figure? The 2500K is better at gaming, but substantially slower at what this chip is made for: heavily threaded workloads. If you are only gaming, you would be better off, dollar for dollar, with a chip with fewer cores.

        • Ringofett
        • 7 years ago

        I’d say “somewhat” slower at heavily threaded workloads, but sufficiently more energy-efficient in virtually any scenario as to start saving you money each and every month, something that will be increasingly pronounced the more you actually do use a system for heavily multithreaded tasks (or any heavy tasks). Some quick math with my local power rates suggests an extra ~$200 a year for 24/7 operation, factoring in cooling (it works against the AC). The performance delta ain’t that great, unless I take up a hobby of using 7-Zip to compress… the internet.

        Unless electricity and cooling are free, I think anyone getting this is doing it to some extent to support AMD. Thankfully, performance is up enough that at least it’s not *totally* nuts to do so, but I’d still be surprised to see it work its way into any build guides at review sites.

          • FubbHead
          • 7 years ago

          They consume essentially the same energy when idle, which is where most desktop systems spend maybe 90-95% of their time. And if the CPU draws some 40W more for the remaining 10%, it will hardly show up on most people’s electricity bills, methinks. But if you run 24/7, then sure, it will probably show up: like 3 light bulbs instead of 2.

          But regardless, they really need to work on the IPC from now on, not keep raising clock speed… It didn’t work well for the P4; why should it work now?

          • jensend
          • 7 years ago

          You’re nearly an order of magnitude off. Electricity is only an average of $0.12/kWh and end-user machines spend the vast majority of their time in idle states (even while being used for web browsing etc), so that’s under $24/year. A normal air conditioner has an efficiency ratio of around 4-5, and people don’t use AC 24/7 the entire year – more like 12 hours a day for a third of the year- so that’s under $1. In most climates you’ll gain most of that <$1 back through marginally reduced heating costs during the cool months.

          If you’re concerned about power costs there are much bigger fish to fry than processor choice. Allowing the system to suspend when it’s not in use for long periods will save an i5 user at least $40/year and will save an FX user at least $50/year. Display backlight type, blanking time, and brightness are important too.

          And of course each of these makes less of a difference than turning off one 60W incandescent bulb.
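          A rough sketch of that counter-estimate: the ~$0.12/kWh rate is as stated above, while the ~20W average 24/7 extra draw is an illustrative assumption for a mostly idle machine, not a measured number:

              RATE = 0.12          # $/kWh, the quoted U.S. average
              AVG_EXTRA_W = 20     # assumed average extra draw, mostly idle (illustrative)

              annual_kwh = AVG_EXTRA_W * 24 * 365 / 1000    # ~175 kWh/year
              print(f"~${annual_kwh * RATE:.0f}/year")      # ~$21/year, under the $24 bound above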

            • bcronce
            • 7 years ago

            Speaking of idle power draw, Haswell was demoed consuming 1/20th the power draw of Ivy Bridge. AMD has a lot to do in the next 9 months.

          • xeridea
          • 7 years ago

          The 8350 is about 50% faster in heavily threaded workloads; that’s not “somewhat”. Gaming performance is still plenty good in any game, and the best out of any AMD CPU.

          As others have stated, most systems aren’t at 100% load 100% of the time; idle is the majority, and A/C is offset by the power being essentially free in winter, since you need to heat your house anyway. The 8350 is also $25 cheaper.

      • sschaem
      • 7 years ago

      The 2500K and the 8350 get about the same gaming performance.

      I wouldn’t buy the 2500K to get 2 extra average FPS in BF3.

      Also, assuming games in the next 24 months will be single-thread friendly is very short-sighted.

      So I think the 8350 is a much better deal long-term than the 2500K, especially for gaming.

      And if AMD is still alive (OK, big if), Steamroller will be available for AM3+.
      From what I understand, 1155 is EOL, so if you get the 2500K you will have to hunt eBay for a used i7-3770K.

        • brucethemoose
        • 7 years ago

        [quote<]So I think the 8350 is a much better deal long-term than the 2500K, especially for gaming.[/quote<] How do you figure that?

        • travbrad
        • 7 years ago

        [quote<]Also, assuming games in the next 24 months will be single-thread friendly is very short-sighted.[/quote<] A game doesn't have to be single-threaded for Intel's CPUs to have an advantage. An i5 has 4 very fast cores to work with, so Piledriver only starts closing the performance gap in games that use MORE than 4 cores, or games that are GPU-limited (like BF3). Most games still don't even use 4 cores, 6 years after the first desktop quad-core CPUs. We are even seeing lots of games that still only use 2 cores. Of course we'd all like games to use as many cores as possible, but expecting games to suddenly be using 5-8 cores in the next couple of years is unrealistic if you look at recent trends.

        • Airmantharp
        • 7 years ago

        If you’re using single-player BF3 to compare CPU load, you are wrong.

    • tbone8ty
    • 7 years ago

    [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16819113284&Tpk=fx-8350[/url<] $219.99. Gotta pay to play at first; hopefully these will settle to $195.

      • ronch
      • 7 years ago

      Good to see they’re retaining use of the tin box. When I buy my FX I want my tin box!

    • tfp
    • 7 years ago

    That is a good amount of improvement overall from AMD. If they can fix the front end so it can feed 2 integer cores and the FP unit at the same time (unlike now, when it serves only one at a time), they should have a good chip.

    • Waco
    • 7 years ago

    I hoped for a lot more. I guess BD wasn’t a failure because of a “small” bug somewhere in the chip, but because of a larger overall problem…

    I can’t see why anyone would buy one.

      • shank15217
      • 7 years ago

      do you live in a cave?

      • Arag0n
      • 7 years ago

      The improvement is quite nice, and actually I think AMD will have better performance for gaming too, just not right now. Most games are still single-threaded, and even the multi-threaded ones still use one main thread plus some helpers, but as games get more complex they also multi-thread better, so in a few years games won’t carry the single-thread performance penalty, IMHO.

      I think AMD did a good job proving that Bulldozer can finally beat Phenom II. They need to iterate and improve the architecture, especially the memory bandwidth IMHO, but their main issue remains the manufacturing process. I would like AMD to be competitive again sooner rather than later, and Piledriver is to Bulldozer what Phenom II was to Phenom, with the difference that AMD won’t change architectures for some iterations now and will take the slow-improvement route Intel has followed since Nehalem: no heavy redesigns, just small improvements, and low risk of a flop between iterations, as Bulldozer was compared to Phenom II.

        • Arclight
        • 7 years ago

        [quote<]I would like AMD to be competitive again sooner rather than later, and Piledriver is to Bulldozer what Phenom II was to Phenom[/quote<] I wouldn't go that far, but it's a bit better than what the B3 revision was to the original Barcelona. Steamroller, hopefully, will bring that kind of improvement, a la Deneb.

      • ronch
      • 7 years ago

      Come on, dude, give AMD a break. Vishera is a good step forward by a company that has a fraction of Intel’s resources. I give AMD engineers more kudos for pulling off something like Bulldozer than Intel’s engineers for having created SB/IB.

        • Waco
        • 7 years ago

        Give them a break? I wasn’t being overly harsh. I just don’t see why anyone would want one over the equivalent Intel chip at the same price point.

        AMD doesn’t have the resources to be so ambitious. Intel BARELY pulled it off with the P4 and they only survived because of said resources.

        Sure, this is a decent step forward, but it’s STILL slower than a Phenom II in terms of IPC. AMD could build a real octo-core CPU with the Phenom II cores, and it’d be faster and more efficient for 99% of everything people (and most server loads) do.

        I really wanted BD to be a success. I really wanted PD to be a success. That doesn’t help much when a CPU from many years ago is STILL faster and more efficient.

        • just brew it!
        • 7 years ago

        While I agree that AMD’s engineers pulled off something pretty darned impressive by getting this far in spite of having fewer resources and an inferior manufacturing process compared to Intel, the CPU market isn’t the Special Olympics. At the end of the day the fact that they were handicapped coming into this fight doesn’t count for anything.

          • ronch
          • 7 years ago

          I agree. However, given the consequences of AMD’s failure, I’m willing to give them some leniency.

    • MadManOriginal
    • 7 years ago

    tl;dr version: Piledriver is great for multithreaded, worse for less threaded programs and gaming, and has poor power consumption. At least it’s priced appropriately.

    • tbone8ty
    • 7 years ago

    Another great review from Damage Labs.

    The FX-8350 surely has a nice plot point on your perf/dollar scatter plot.

    Overclocked power consumption has improved a lot!

      • Rza79
      • 7 years ago

      Agree, great review.
      Indeed, only 66W more while overclocked. This seems to be much less than before, but it needs a more thorough investigation.

      I have one comment: you’re still using Windows Movie Maker 14 and rendering to WMV. Maybe it’s time to upgrade to version 16, since it no longer uses WMV as the default. In a way, that test has become obsolete.

      I also noticed something with the two Civilization V tests, concerning how much a CPU gains from the no-render mode:
      Ivy: 88% – Sandy: 91% – Lynnfield: 100% – 8350: 98% – 8150: 129%! (percentage gain)
      It seems that both manufacturers are improving their efficiency while rendering.
      The second thing you’ll notice is that all the Zambezi cores are stuck at 41 fps (which also explains the huge gain the 8150 makes from going to no-render).
      Could it be that AMD screwed up something as simple as the HyperTransport interface to the northbridge on Zambezi? It would explain why Vishera is much improved in gaming. Could you run some PCIe bandwidth and latency benchmarks on both?
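      For clarity, here is presumably how those gain percentages are computed; the fps numbers in the example are placeholders consistent with the 41 fps / 129% observation above, not figures from the review:

          def norender_gain(fps_render: float, fps_norender: float) -> float:
              """Percentage speedup when Civ V's renderer is turned off."""
              return (fps_norender / fps_render - 1) * 100

          print(f"{norender_gain(41.0, 94.0):.0f}%")   # 41 fps -> 94 fps works out to ~129%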

      • badnews
      • 7 years ago

      Best review I have read. Anand’s was very disappointing, especially the pointless extrapolation/guesswork comparison with Haswell.

      The performance of this CPU is very attractive, but it clearly falls down on power consumption. I really think the conclusion should feature a power/performance scatter plot.

      That excludes it from the area it would be perfect for — the “typical” datacentre environment. I would love to rack some Piledriver servers, but power is a primary cost. I wish AMD could make progress on the process-technology front ASAP!

    • Crayon Shin Chan
    • 7 years ago

    Oh yes…. now let’s see if PD > BD! My AM3+ board is itching for an upgrade…
