Intel’s Core i7-4960X processor reviewed

Begin NSA intercept.
Time: 21:30 09-02-13.
Station: Medina/Lackland RSOC. San Antonio, TX.
Source feed: Time Warner Cable, MKC-CPE-65-26-res.rr.com.

Hey, Jamaal, check this out. This computer nerd guy is doing another review of a CPU chip. Been doing them for over a decade, every time they come out with a new one, like clockwork. This has gotta be, like, number 150 or so. I don’t know how he does it.

What’s that? Now, don’t get all prissy on me for looking in on civilians. Everybody snuck a peek at that Nebraska cheerleader, even you. Things changed when Hanley got the commendation from Biden for that one. Besides, this guy’s review will be completely public in a few hours. What’s the harm?

This is about the latest high-end chip from Intel. World’s fastest or something like that. He hasn’t put any text in it yet, but all the pictures and speed graphs are in there. These reviews are totally formula, kinda rote. I’ve kinda been following this stuff, thinking about upgrading my gaming rig. Betcha I can tell you what he’d say about it just from what’s in there now.

| Code name | Key products | Cores/modules | Threads | Last-level cache size | Process node (nm) | Estimated transistors (millions) | Die area (mm²) |
|---|---|---|---|---|---|---|---|
| Gulftown | Core i7-9xx | 6 | 12 | 12 MB | 32 | 1168 | 248 |
| Sandy Bridge-E | Core i7-39xx | 8 | 16 | 20 MB | 32 | 2270 | 435 |
| Ivy Bridge-E | Core i7-49xx | 6 | 12 | 15 MB | 22 | 1860 | 257 |
| Vishera | FX | 4 | 8 | 8 MB | 32 | 1200 | 315 |

Yeah, look, it’s Ivy Bridge-E. Basically a server chip with more cores and cache that’s been converted into an expensive desktop part. This one is a drop-in replacement for an older chip, Sandy Bridge-E—same socket, but built with a newer manufacturing method.

Hmm. Last time, they used an eight-core chip and disabled a couple of cores and some cache. This time, they’re showing a native six-core part with less cache. That’s weird. The chip size is way down, too. Even the transistor count. I wonder what Jenkins in IT thinks about that.

Dials extension.

Hey, Jenkins. This is Blanda up in observation. Quick question for your computer-geeky brain to answer for me. What do you think of a 22-nanometer Xeon chip that has fewer cores and less cache than the 32-nm one? Like six and 15MB versus eight and 20MB. Would they really do that?

Uh-huh. Really. Lots cheaper? Hmm. Gotcha.

Seriously? Coming out soon? Haha, wow.

Ok, thanks, man. See you at the fantasy draft tomorrow night. Blandanna is going all the way this year!

Hangs up.

Ok, so get this. Jenkins says the guys down the hall from him have been testing a 22-nm Xeon chip—pre-release thing we can’t talk about—that has twice the everything inside of that Ivy Bridge-E. Man, some of those extreme builder dudes are gonna be pissed if they drop nearly a grand on one of those six-core chips and find out later about this other one. I guess Intel thinks people don’t need more than six cores in a desktop system. I see the logic, but wow. You’ve gotta think some of those guys would pay even more for the bragging rights.

Pretty sure the 4770K’s TDP is actually 84W, not 95W. Source: Intel.

Speaking of that, check out these prices. The psychology is almost as fascinating as whatever was happening with that rancher dude in Wyoming with the chickens and the laser pointer. Still can’t believe we got that on tape.

You can pay nearly $1K for the top dog, the Core i7-4960X, that tops out at 4GHz, or you can pay just over half that for the same basic thing with 3MB less cache and a 0.1GHz lower peak clock. And these things come unlocked, so you can set your own clock frequencies in the BIOS. Seriously, no one should buy the 4960X when the 4930K exists, but you know people will. I’d kill to know the sales breakdowns on those.

Well, that’s hyperbole. I might put out some feelers with our friends up in CORPINT, though. Kinda interesting.

Man, looking over that table, I just don’t see much difference between this new 4960X and the Core i7-3960X that came out two years ago. Both have six cores, twelve threads, 15MB last-level cache, and a 130W power envelope. The new chip’s base clock is like 300MHz higher, and its peak clock is 100MHz higher, but that’s about it. Intel even did a 3970X last year, with a 3.5GHz base and 4GHz peak, so we’re talking baby steps here.

The only other change I see in the table is that 1866MHz memory now has the official blessing. That’s nice, but the X79 already has quad memory channels. I wonder if it matters.

That last, “cheap” model, the 4820K, is priced below the top regular desktop CPU, the 4770K. The 4770K is a newer architecture, but it kinda has less of everything else, like cache, memory channels, and power headroom. I’ll betcha the 4770K is still faster for most stuff.


Block diagram of the X79 platform. Source: Intel.

I guess the 4820K exists as kind of an entry point into the X79 platform. Get a load of those numbers. Four memory channels at almost 15 GB/s each, 40 PCI Express 3.0 lanes coming straight off the CPU. That’s, like, more bandwidth than the NSA backbone.

Well, as far as Congress knows.
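
Quick back-of-the-envelope check on that “almost 15 GB/s per channel” figure, since Jamaal looks skeptical. This is just peak theoretical math for a standard 64-bit DDR3 channel, nothing measured, and the little C sketch is mine, not anything out of his review:

```c
/* Peak theoretical DDR3 bandwidth math (assumes a standard 64-bit channel
 * and ignores all real-world overhead). Illustration only. */
#include <stdio.h>

int main(void)
{
    const double transfers_per_s = 1866e6; /* DDR3-1866 */
    const double bytes_per_xfer  = 8.0;    /* 64-bit channel = 8 bytes per transfer */
    const int    x79_channels    = 4;      /* quad channel on X79 */

    double per_channel = transfers_per_s * bytes_per_xfer / 1e9;
    double x79_total   = per_channel * x79_channels;
    double haswell     = 1600e6 * bytes_per_xfer * 2 / 1e9; /* dual-channel DDR3-1600 */

    printf("Per channel:       %.1f GB/s\n", per_channel); /* ~14.9 GB/s */
    printf("X79 quad channel:  %.1f GB/s\n", x79_total);   /* ~59.7 GB/s */
    printf("Haswell dual 1600: %.1f GB/s\n", haswell);     /* ~25.6 GB/s */
    return 0;
}
```

So on paper it’s roughly 60 GB/s against about 25.6 GB/s for a dual-channel DDR3-1600 Haswell box, which is where the “more than double” talk later comes from.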

The X79 still looks nice on paper, but it is getting kinda old. No USB 3.0? Only two SATA 6Gbps ports? Kinda weaksauce for 2013, really—and those Haswell systems look awfully nice. In fact, let me give you Blanda’s big list of reasons to build an X79-based system and the percentage of people who fit each one:

  • Need more cores, cache, and memory bandwidth for a real application. (2%)
  • Need higher memory capacity for actual workloads. (4%)
  • Need more PCIe lanes for multi-GPU configs. (1%)
  • Thinks you need more PCIe lanes for multi-GPU configs. (12%)
  • Need more knobs for extreme overclocking. (5%)
  • Bragging rights, money > sense. (51%)
  • Clicked the wrong button on Falcon Northwest online store. (27%)

Really, people make less sense the more you know about them. Anybody who works here knows what I mean.

Our testing methods

This page is just boilerplate. He thinks he’s all fancy showing exactly how he tested everything, but nobody reads it. You should see the number of e-mails he gets—even from inside of chip companies, no joke—asking basic questions about the test setup.

The test systems were configured like so:

| Processor | AMD FX-4350, FX-6350, FX-8350 | AMD A10-5800K, A10-6700, A10-6800K |
|---|---|---|
| Motherboard | Asus Crosshair V Formula | MSI FM2-A85XA-G65 |
| North bridge | 990FX | A85 FCH |
| South bridge | SB950 | |
| Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) |
| Memory type | AMD Performance Series DDR3 SDRAM | AMD Performance Edition DDR3 SDRAM |
| Memory speed | 1600 MT/s | 1600 MT/s |
| Memory timings | 9-9-9-24 1T | 9-9-9-24 1T |
| Chipset drivers | AMD chipset 13.4 | AMD chipset 13.4 |
| Audio | Integrated SB950/ALC889 with Realtek 6.0.1.6873 drivers | Integrated A85/ALC892 with Realtek 6.0.1.6873 drivers |
| OpenCL ICD | AMD APP 1124.2 | AMD APP 1124.2 |
| IGP drivers | | Catalyst 13.5 beta 2 (Trinity), Catalyst 13.101 RC1 (Richland) |

| Processor | Core i3-3225, Core i5-3470, Core i7-2600K, Core i7-3770K | Core i7-4770K | Core i7-3970X, Core i7-4960X |
|---|---|---|---|
| Motherboard | Asus P8Z77-V Pro | Asus Z87-Pro | Asus P9X79 Deluxe |
| North bridge | Z77 Express | Z87 Express | X79 Express |
| South bridge | | | |
| Memory size | 16 GB (2 DIMMs) | 16 GB (2 DIMMs) | 16 GB (4 DIMMs) |
| Memory type | Corsair Vengeance Pro DDR3 SDRAM | Corsair Vengeance Pro DDR3 SDRAM | Corsair Vengeance DDR3 SDRAM |
| Memory speed | 1600 MT/s | 1600 MT/s | 1600 MT/s (i7-3970X), 1866 MT/s (i7-4960X) |
| Memory timings | 9-9-9-24 1T | 9-9-9-24 1T | 9-9-9-24 1T (i7-3970X), 9-10-9-27 1T (i7-4960X) |
| Chipset drivers | INF update 9.4.0.1017, iRST 12.5.0.1066 | INF update 9.4.0.1017, iRST 12.5.0.1066 | INF update 9.4.0.1017, iRST 12.5.0.1066 |
| Audio | Integrated Z77/ALC892 with Realtek 6.0.1.6873 drivers | Integrated Z87/ALC1150 with Realtek 6.0.1.6873 drivers | Integrated X79/ALC898 with Realtek 6.0.1.6873 drivers |
| OpenCL ICD | Intel SDK for OpenCL 2013 | Intel SDK for OpenCL 2013 | Intel SDK for OpenCL 2013 |
They all shared the following common elements:

Hard drive: Kingston HyperX SH103S3 240GB SSD
Discrete graphics: XFX Radeon HD 7950 Double Dissipation 3GB with Catalyst 13.5 beta 2 drivers
OS: Windows 8 Pro
Power supply: Corsair AX650

Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel, and AMD for helping to outfit our test rigs with some of the finest hardware available. Thanks to Intel and AMD for providing the processors, as well, of course.

We used the following versions of our test applications:

Some further notes on our testing methods:

  • The test systems’ Windows desktops were set at 1920×1080 in 32-bit color. Vertical refresh sync (vsync) was disabled in the graphics driver control panel.
  • We used a Yokogawa WT210 digital power meter to capture power use over a span of time. The meter reads power use at the wall socket, so it incorporates power use from the entire system—the CPU, motherboard, memory, graphics solution, hard drives, and anything else plugged into the power supply unit. (The monitor was plugged into a separate outlet.) We measured how each of our test systems used power across a set time period, during which time we encoded a video with x264.
  • After consulting with our readers, we’ve decided to enable Windows’ “Balanced” power profile for the bulk of our desktop processor tests, which means power-saving features like SpeedStep and Cool’n’Quiet are operating. (In the past, we only enabled these features for power consumption testing.) Our spot checks demonstrated to us that, typically, there’s no performance penalty for enabling these features on today’s CPUs. If there is a real-world penalty to enabling these features, well, we think that’s worthy of inclusion in our measurements, since the vast majority of desktop processors these days will spend their lives with these features enabled.

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Memory subsystem performance

Ok, let’s look over this guy’s results and see if there’s really anything interesting going on here.

Yeah, there’s the X79’s claim to fame: memory bandwidth. Looks like that bump up to 1866MHz memory paid off for the new Ivy Bridge-E chip, too. This thing more than doubles the throughput of the Haswell Core i7-4770K.

Heh, ouch. Ivy-E gets pwned by Haswell in L1 cache bandwidth, even with six cores to Haswell’s four. I believe Intel doubled the L1 cache’s throughput in Haswell because AVX2 needs the bandwidth. Having 50% more Sandy or Ivy Bridge cores doesn’t give you enough to keep up.
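
If you want to see why AVX2 is so hungry, here’s a minimal, made-up C sketch of the kind of fused multiply-add loop it enables (build with -mavx2 -mfma; this is my illustration, not code from his test suite). Every iteration pulls two 32-byte vectors out of the L1 cache:

```c
/* Toy AVX2 FMA loop: y = a*x + y, eight floats at a time.
 * Each iteration does two 32-byte loads and one 32-byte store, which is
 * why Haswell's wider L1 ports matter for this kind of code. */
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

static void saxpy_avx2(float *y, const float *x, float a, size_t n)
{
    __m256 va = _mm256_set1_ps(a);              /* broadcast the scalar */
    for (size_t i = 0; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);     /* 32-byte load */
        __m256 vy = _mm256_loadu_ps(y + i);     /* 32-byte load */
        vy = _mm256_fmadd_ps(va, vx, vy);       /* fused multiply-add */
        _mm256_storeu_ps(y + i, vy);            /* 32-byte store */
    }
}

int main(void)
{
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8] = {0};
    saxpy_avx2(y, x, 2.0f, 8);
    printf("y[3] = %.1f\n", y[3]);              /* 2*4 + 0 = 8.0 */
    return 0;
}
```

Sandy and Ivy can only feed about half as many bytes per cycle out of L1, so stacking on 50% more of those cores doesn’t close that particular gap.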

Yawns.

We have any more of that diet Bawls soda, Jamaal? I need something to wash down these ranch Corn Nuts.

Some quick synthetic math tests

Haha, yikes. The 4770K beats the new Ivy-E 4960X in a couple of these tests. You’ve gotta figure that’s AVX2 kicking in. Doubles the floating-point throughput in a lot of cases. The Ivy-E is never slow, but it’s kinda embarrassing to get caught by a quad-core with under half the memory bandwidth.

Crysis 3



So there’s a lot of stuff. This guy likes to show more than just frames per second when talking about game performance. He says there’s more to it than just averages. I think there’s probably something to that, since he found some problems with Radeons a while back that were apparently real issues. He’s kinda pretentious about it, though. I’m like, dude, it’s just a frigging video game. Get over yourself.

Anyhow, looks to me like these numbers put the 4960X in a good light, but it’s not really meaningfully faster than the quad-core Haswell or Ivy Bridge chips from the past couple of years.

The contrast with some of the recent AMD “APU” chips is kinda brutal. They all seem to do pretty OK with FPS averages, but these other numbers don’t look so good. There are some nasty spikes in the plot for that A10-6800K, too, which is kind of the point, I guess. You’re probably gonna feel those slowdowns.

Far Cry 3



Hey, Jamaal, you like Corn Nuts? These ranch ones are my favorite snack, hands down, but I kinda worry about whether I’m gonna split a tooth one of these days.

Tomb Raider



This is a pretty cool game, although I could only watch Lara bounce around on screen for so long before I had to schedule some “alone time” with Rachel. First time a video game ever had that effect on me. Took me two weeks to finish the main quest, but it was a good two weeks.

Looks like you could run Tomb Raider on a pocket calculator, though. That horrible purple latency curve for the A10-6800K that’s so much higher than everything else? Tops out at under 25 milliseconds. That’s like 40 FPS for the minimum.
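
For anybody who doesn’t do frame-time math in their head: FPS is just 1000 divided by the frame time in milliseconds. A throwaway conversion of mine, not anything out of his data files:

```c
/* Frame-time-to-FPS conversion used above: 25 ms per frame = 40 FPS. */
#include <stdio.h>

int main(void)
{
    double frame_time_ms = 25.0;
    printf("%.0f FPS\n", 1000.0 / frame_time_ms); /* prints 40 */
    return 0;
}
```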

Metro: Last Light

Seriously, just an FPS average, after all of that? Lame. He’s padding this thing out.

Productivity

Compiling code in GCC

TrueCrypt disk encryption

Dude, look, they’re gonna encrypt their stuff! Mwahahaha. Hahaha. Hah.

Heh. Man. Heh. Sniff.

They do keep trying, don’t they?

7-Zip file compression and decompression

SunSpider JavaScript performance

Video encoding

x264 video encoding

Handbrake video encoding

Image processing

The Panorama Factory photo stitching

picCOLOR image processing and analysis

Well, at least there’s an advantage to having those six cores in some of these programs. The 4960X also seems to be a little quicker than the 3970X. If I recall correctly, the Ivy core is supposed to be good for like four to six percent more oomph than Sandy, clock for clock.
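
Actually, you can ballpark how much quicker the 4960X “should” be: take that IPC figure and multiply by the clock ratio from the spec rundown earlier. Rough napkin math in C, using his quoted numbers rather than any measurement:

```c
/* Rough expected 4960X-over-3970X gain: the quoted Ivy-vs-Sandy IPC bump
 * times the clock-speed ratio. Napkin math, not benchmark data. */
#include <stdio.h>

int main(void)
{
    double ipc_gain   = 1.05;      /* "four to six percent," take the middle */
    double base_ratio = 3.6 / 3.5; /* 4960X vs. 3970X base clocks */
    double peak_ratio = 4.0 / 4.0; /* peak Turbo clocks are the same */

    printf("At base clocks: ~%.0f%% faster\n", (ipc_gain * base_ratio - 1.0) * 100.0);
    printf("At peak Turbo:  ~%.0f%% faster\n", (ipc_gain * peak_ratio - 1.0) * 100.0);
    return 0;
}
```

So “a little quicker” is about right: call it five to eight percent, depending on where the clocks sit.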

3D rendering

LuxMark

Cinebench rendering

POV-Ray rendering

Finally, the 4960X puts the 4770K in the rear-view in the rendering tests. Guess those don’t support AVX2 yet, huh?

Scientific computing

MyriMatch proteomics

STARS Euler3d computational fluid dynamics

Here’s an actual reason, from Blanda’s big list, to get yourself a 4960X system: scientific computing, where more cores and memory bandwidth really matter.

Power consumption and efficiency


Oh man. I see some problems with this one. First of all, the idle power draw on their X79 motherboard, the Asus P9X79 Deluxe, is way too high. Have a look over here, and you’ll see the same basic setup on a Gigabyte board drawing 63W at idle. So that’s bogus. Also…

No way is that x264 encoding workload using all six cores and 12 threads fully. Check the plots. The encode takes the same amount of time, about 45 seconds, on the 4960X and on the quad-core Haswell 4770K. That’s gotta be why the delta between idle and load on the 4960X is only 60W. He picked the wrong workload for a six-core CPU.
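
If you want a number on it: a 60W load-minus-idle delta held for a 45-second encode is only about 2.7 kilojoules of extra energy. Rough math from the figures I just eyeballed off his plots, not something he tabulated:

```c
/* Task-energy estimate: (load power - idle power) * encode duration. */
#include <stdio.h>

int main(void)
{
    double delta_watts = 60.0; /* eyeballed idle-to-load delta for the 4960X */
    double seconds     = 45.0; /* approximate x264 encode duration */
    printf("Extra energy for the encode: ~%.0f J\n", delta_watts * seconds); /* ~2700 J */
    return 0;
}
```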

This guy is usually pretty meticulous about how he tests. That’s gotta be giving him an OCD flare-up.

Hmm. Well, even with those problems, this looks like progress from the 3970X to the 4960X.

Overclocking

Oooh, so he got Ivy-E up to 4.7GHz. Gotta admit, I’d be more impressed if he hadn’t taken the 3960X to 4.8GHz way back when. Still, that’s six cores running as fast as his Canadian sidekick got the Haswell quad to go.

Ok, Jamaal, so I have a confession to make in this context. Please just hear me out, OK? I really have been following this stuff for a while, and I knew Ivy-E was coming. I really wanted to see how it would OC, since, like I said, I’m thinking about building myself a new gaming rig. So… the other day—I’ve kinda been monitoring this dude’s webcam to see how his overclocking tests were going.

It was just too easy. He’s running Windows 8, and you know everything Redmond has done for us, how easy they’ve made it. I just flipped it on in stealth mode, and he was none the wiser. I mean, it’s frigging amazing how trusting geeks are of that little light that says whether the camera is on or off. Like it’d be some kind of physical impossibility to turn on the CCD without the LED coming on.

Anyway, it was totally innocuous, just some guy in a lab listening to Mumford, singing horribly off key, and futzing around with BIOS options. Really, nothing that would raise any flags in oversight, if they weren’t already totally overwhelmed with the whole Snowden deal.

He starts out with a big tower cooler on the CPU, looked like maybe a Frio OCK, big momma, with the fan running full tilt. Sounded like a Dyson with tuberculosis. He gets it up to 4.7GHz at 1.35V, and under load, temps are topping out at around 81°C. The cooler is just barely keeping up with the heat load, but it’s stable.
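
That tracks with the usual rule of thumb that dynamic power scales with frequency times voltage squared. If you assume stock is somewhere around 4GHz at 1.1V (my guess; he never showed the stock voltage), the napkin math says the overclock is pulling close to double the heat:

```c
/* Rule-of-thumb dynamic power scaling: P is roughly proportional to f * V^2.
 * The stock operating point below is an assumption; only the 4.7 GHz at
 * 1.35 V figure comes from the observation above. */
#include <stdio.h>

int main(void)
{
    double f0 = 4.0, v0 = 1.10; /* assumed stock frequency (GHz) and voltage (V) */
    double f1 = 4.7, v1 = 1.35; /* observed overclock */
    double scale = (f1 / f0) * (v1 * v1) / (v0 * v0);
    printf("Dynamic power scales by roughly %.1fx\n", scale); /* ~1.8x */
    return 0;
}
```

Call it 1.8 times the dynamic power on a chip already rated at 130W, so no wonder both coolers were sweating.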

Then he pulls the tower cooler and hooks up a water cooler—a nice one, Asetek design, dual fans and pretty beefy radiator, although not really any bigger than the twin fans and heatsink on the Frio. I dunno, maybe he thinks water is magic. He leaves the CPU running at the same settings, and lo and behold, the water cooler can’t quite cut it. The fans are cranking, the pump is going, but temps creep up under load until the system blue-screens.

Dude was totally better off with air cooling, because it turns out water is not magic.

At least, you know, the performance is nice at 4.7GHz, if you don’t mind the noise.

Conclusions


So, these scatter plots with price and performance kinda put things into perspective, don’t they? The 4960X really is the fastest desktop CPU overall, but it ain’t exactly blowing my skirt up here compared to the 3970X.

You know what would probably look great on these plots? That Core i7-4930K that costs under 600 bucks and runs at almost the same exact speed. You’d think Intel would want to see those parts tested, too, but nope. Believe me; I’ve read the emails. They’d rather send reviewers only the $1K-ish product and keep the focus there. Seems strange, since resources like these scatter plots happen to exist.

And I’m still kinda hung up on the “everything doubled” version of this chip that Jenkins claims we’re testing. Shouldn’t it be the $1K part? Even if folks don’t need all of the cores, they could at least sell a version with six cores and a titanic L3 cache. People would dig that. As things stand, Ivy-E sure looks like more of a cost saver for Intel than anything else.

Oh well. That gaming plot has me convinced. I’m gonna buy a 4770K for my gaming rig. Should be just as fast as the 4960X, but a third the price and way more power efficient. Besides, I’m gonna need the money for dental work if I keep eating these Corn Nuts.

End NSA intercept. //X-KEYSCORE v.10.0.24.3.

You can monitor my activities constantly via Twitter.

Comments closed
    • itachi
    • 6 years ago

    Damn performance gain so irrelevant it’s dumb, I’m definately going for that 4770k … PS: If intel would cut the price of the 3770k Maybe I would get it.. uhg but wait it’s nearly the same performance so why would they reduce it?? boo

    • itachi
    • 6 years ago

    Lol you should have hyperlinked the chicken and laser stuff 😀

    • moose17145
    • 6 years ago

    Soo…. Looks like I will be hanging onto my i7-920 rig for a while longer then…

      • TwoEars
      • 6 years ago

      Exactly the same system and exactly the same conclusion…. rock stable, plays all current games, does everything I need.

    • BIF
    • 6 years ago

    Has anybody else noticed that IB-E has only 6 cores and the “2 blanks” that were formerly present in SBE are absent here?

    If I was a bettin’ man (and I am), I’d bet that Intel has decided that there will NOT *ever* be an 8 core version of this chip.

    So is it my turn to be “not impressed”?

    I want it all, dammit! Or at least the possibility of having it all.

      • BIF
      • 6 years ago

      Wow, that other article just made me a liar.

      I’m so embarrassed now, I can’t even bear to link to it.

      For my punishment, I may have to buy one…

      • Krogoth
      • 6 years ago

      The 8-core and 12-core version of silicon are actually an entirely different part. They are Ivy Bridge-EPs not Ivy Bridge-E. They both come from the same source, the Ivy Bridge-Es are recycled silicon that didn’t make the full Ivy Bridge-EP cut.

    • kamikaziechameleon
    • 6 years ago

    Where is Haswell-E? This is old tech!

      • Krogoth
      • 6 years ago

      It will eventually come down the pipeline. Intel has always been conservative with their server/workstation line, and they make sure it undergoes extra QA. The lack of competition in this range doesn’t help things either.

    • kamikaziechameleon
    • 6 years ago

    This type of humor is lost in a long winded technical article. I hate skipping to the end, sadly it wasn’t a reprieve.

    • dashbarron
    • 6 years ago

    Great, look what you’ve done Cyril. This is what happens when people like you are complacent. I told Scott this at the BBQ.

    • matnath1
    • 6 years ago

    This review has injected some light hearted humor into the new format that is greatly appreciated.

    In fact:

    IT kicks more ass than a pair of donkeys in an MMA cage match. (Radeon HD4850 review remember)

    Cracked me the _ UP!

    To those who didn’t like it:

    It’s just a review get over yourselves!

      • bre
      • 6 years ago

      I agree fully, matdem.

      actually, I think it was a more interesting read than usual TBH (but that’s maybe more a subjective thing, maybe because this writing style is “new and different”).

      Scott, I found this mix of cold hard information (test results), satire and “political” commentary a brilliant read. I actually went over all the pages quickly just to read the faux NSA commentary snippets :P, and I usually don’t read all the pages thoroughly.

      I look forward to reading from your NSA characters again.

      Thumbs up to you Scott!

    • Klimax
    • 6 years ago

    Some interesting things.
    1. Top A10 is subpar – beaten by i3 all the time (which is cheaper here), while games rarely using all resources. (note: new consoles won’t help)
    2. Shows that new microarch did bring new things and improved performance.
    3. Should show that it won’t matter much what way IHS is attached. Physics and fine-tuning of process overrides wishes of OCers. (No. of transistors + size of die = ?)

    And lastly, format was fun but not good for review. (Similar things are best for game reviews when used properly, but not this type)

    • Auril4
    • 6 years ago

    I no longer know anyone who uses a PC on a daily basis anymore for personal use.
    These new CPU introductions are reaching a smaller and smaller audience every year.
    I remember the day when family and friends would talk at length about computer hardware during get-togethers. No one does that anymore.

      • Airmantharp
      • 6 years ago

      We were browsing the Internet, playing games, and accomplishing significant productivity work on 486’s. The need for super-fast systems for desktop use is gone; now, it’s either whatever’s cheapest, or it’s a niche high-end setup to be used as an actual workstation.

      • f0d
      • 6 years ago

      its the opposite for me
      i dont see any use in tablets (for me anyways) and all my friends and family play their games on PC
      sure tablets are good for looking at webpages but i cant see much other uses for them (again for me personally – even then id prefer my big screen for reading webpages)

      id hate to try and convert a blu ray (no power) or type up documents (no keyboard) and even playing games on one (small screen) – it would be painful

      but these are my personal uses – im sure other people find them handy for stuff

        • JustAnEngineer
        • 6 years ago

        I can stick the Nexus 7 in my front jeans pocket. My gaming PC doesn’t fit.

          • f0d
          • 6 years ago

          i tried to game on a tablet i really did but i find the games on it are horrible and i cant get used to such a small screen

          im not really into indie games and the types of games that are on tablets – i like games like firefall planetside 2 crysis 3 battlefield games (bf2/bf2142) etc etc, the kind of games on phones and tablets just dont interest me

          i have no need to game anywhere else but home – if im at work im working, if im at a friends house we are usually playing games on his pc, if im at a pub im drinking and i would probably do something silly to break a tablet lol 😛

          some people obviously have uses for them but i have pretty much none – as i said before im just talking about me personally that I (and my friends/family) dont have any use for a tablet yet

          • cygnus1
          • 6 years ago

          wtf kind of jeans do you wear?

      • Anonymous Coward
      • 6 years ago

      Thats funny, I have both new & large fancy smartphone and a new & large fancy tablet, the phone makes calls and the tablet is OK for the weather forecast. Could have saved myself the expense, really.

    • jihadjoe
    • 6 years ago

    Hey NSA guy! I know you’re reading this…

    Got any pics of Rachel?

      • Scrotos
      • 6 years ago

      Or that cheerleader!

    • Geonerd
    • 6 years ago

    How many threads are you assigning for the Euler test? Since each thread chews a LOT of memory bandwidth, best performance usually occurs at one thread per core. Simply assigning ‘a lot’ will give erratic, erroneous results. (I seem to recall you guys feeding a Thuban 8 threads during a CPU comparison, which killed its performance.)

      • Damage
      • 6 years ago

      For this article, in Euler3D, we tested with the same number of max threads as the number of hardware threads on the CPU.

      In my experience, one thread per core is best for architectures without SMT, but two per core is better with two-threaded SMT aka Hyper-Threading.

      https://techreport.com/review/18799/amd-phenom-ii-x6-processors/13
      https://techreport.com/review/19196/intel-xeon-5600-processors/8

      As you’ll see in the second link, things get more complicated with two sockets and NUMA, but we’ve never had negative scaling effects in Euler3D from two threads per core with SMT.

    • Meadows
    • 6 years ago

    Couldn’t read this. Terrible.

    Don’t do this.

      • Master Kenobi
      • 6 years ago

      I didn’t find this article amusing. The information was fine, but it didn’t need to be soaked in tasteless satire for a few days before being sent out.

        • Meadows
        • 6 years ago

        I said the same thing, although phrased shorter.

      • Krogoth
      • 6 years ago

      No fun zone eh?

      Sometime you need to use humor or spice things up when you are dealing with a dry topic. This is just a review for an evolutionary upgrade (a minor one at best).

        • Airmantharp
        • 6 years ago

        I’m impressed that Krogoth is impressed.

          • JustAnEngineer
          • 6 years ago

          I’m meta-impressed with that.

        • Meadows
        • 6 years ago

        I have a well developed sense of humour, I’m British after all. But this review wasn’t fun.

          • Krogoth
          • 6 years ago

          >implying that you need to be “British” in order to have a sense of humour.

          >implying that all forms of humour must involve heavy doses of sarcasm and satire.

            • Meadows
            • 6 years ago

            Well done there, I implied neither.

          • Farting Bob
          • 6 years ago

          I’m a Brit and i too didn’t find it that funny. Maybe for the first few paragraphs, then should have gone back to normal. I ended up just skimming the graphs. Not that you need a wall of text to realise that the latest $1000 Intel CPU will be very fast but very shitty value.

          • flip-mode
          • 6 years ago

          That’s weird, I like most Brits.

            • Airmantharp
            • 6 years ago

            I know, right?

        • Klimax
        • 6 years ago

        It shouldn’t make it harder to get information. Jokes, and fun all the way, but not in detracting way.

          • Krogoth
          • 6 years ago

          Not at all. The review is still clear, despite all of the silly overtones. It provides all of the relevant data and information (It is a simple die-shrink nothing more).

            • Klimax
            • 6 years ago

            Graphs are not everything and some notes got buried or not written more. (In even shorter: missing too much)

      • dashbarron
      • 6 years ago

      Buzz Killington to the rescue. Hey! He’s also British.

      • Anonymous Coward
      • 6 years ago

      There was nothing to say, really.

      • Rakhmaninov3
      • 6 years ago

      You’re obviously with the NSA and took the jabbing to heart. Lighten up.

      • kamikaziechameleon
      • 6 years ago

      I just wish they would up and say in 10 words or less this is a bad product. It’s clearly a rip off with no cost to performance value for the consumer, not even the ones who really like to have the “BEST”

      Maybe the satire allowed them to say more than they would have due to the apparently stringent control intel has on them but none the less I found it tasteless.

      The entire article being WRITTEN IN SARCASM is practically trolling for our responses.

    • brucethemoose
    • 6 years ago

    The consumer CPU world is sort of depressing nowadays… Haswell-e and broadwell won’t really bring anything new, Steamroller could catch up to sandy bridge and mildly undercut it if everything goes right, so people waiting for a significant performance bump are stuck waiting for Skylake (if it even brings any more single core performance) and Excavator (if it isn’t canceled).

    I got my Phenom II a month before the 2500k came out… and I’m starting to regret it :/

    • hubick
    • 6 years ago

    I would also like to respectfully register my dislike for this writing style.

    This site was my first choice when looking for a review, but I just want a quick no-nonsense breakdown of what’s new, and couldn’t bring myself to continue reading. Sorry.

      • Deanjo
      • 6 years ago

      Let me sum it up for you, overpriced and underwhelming performance when compared to the much cheaper options out there.

        • Krogoth
        • 6 years ago

        Correction, it is bloody overkill for the masses and only makes fiscal sense if you are running professional applications that take advantage of extra threads. 4770K and 8350 are respectful alternatives if budget is a concern.

          • MadManOriginal
          • 6 years ago

          The 4960X, like all ‘extreme’ processors from both Intel and AMD, will never come out on top on a price/performance curve. However, the price/performance of the second-tier CPU, in this case the 4930K, is often almost justifiable. For someone who can actually max all the cores, and granted that’s obviously not ‘the masses’, there is a price/performance premium but it’s 75% more CPU cost for 50% more multithreaded performance which isn’t bad. Niche? Sure, but higher-end CPUs have been niche since Core 2.

      • LukeCWM
      • 6 years ago

      It was a one-time thing.

      If every article was written this way, yeah, it’d be a problem. But it’s just one time, and I think it is pretty funny. =]

        • indeego
        • 6 years ago

        It’s at least the second time they’ve done this. I think Scott is burnt out on some of these product rehashes, which makes it fine.

          • Xylker
          • 6 years ago

          The first one was absolute genius. This one made me chuckle and I kept reading to see what Blandanna was going to say next.

          Probably one of the best things about this is: “I’m OK with failing, I’ll try this and see what happens. If the idea fails, well, I’ll learn from it.” You really can’t ask for much better. You can dislike this product, register your complaint with the manufacturer and see what happens from there.

            • indeego
            • 6 years ago

            Failing is a wonderful model. More parents should be teaching their kids how to fail repeatedly than succeeding, because you are going to fail a hell of a lot in your life, and might as well do so with grace and control. Offtopic, I’m sure.

            • Airmantharp
            • 6 years ago

            Failure is the best teacher…

      • Kougar
      • 6 years ago

      What’s new compared to what? There’s nothing new here compared to Haswell and the Z87 platform, which I gather was kind of the point for such a sardonic mockery of the 4960X and 2011-era X79 platform.

      About the only thing new is that Intel won’t be updating its X79 motherboards to support Ivy-E chips. Not even Intel is going to bother to support their own IV-E processors, it’s just that bad. 😉

        • Klimax
        • 6 years ago

        C606 mainboard like Gigabyte UD5S…

          • Kougar
          • 6 years ago

          So ONE $300+ model motherboard from GB.

          I will grant that it at least fixes the 2-port native SATA 3.0 limit of X79, but beyond optional SAS ports what does it offer over Z87??

            • Klimax
            • 6 years ago

            Not easiest:
            http://ark.intel.com/compare/75013,64015,63986
            ETA: RAID5

        • Bauxite
        • 6 years ago

        Yeah, its laughable that most of the other oems will be putting out bios updates but intel can’t be bothered. I have another intel-branded board that won’t work properly with two different intel nics as well.

        BTW, X9SRH-7TF is the 2011 board to get, even though its not “new” 🙂

    • Derfer
    • 6 years ago

    Are there any decent reviews of this out? I can’t find any so far. Seems it’s taken so long for Ivy-E to come out that reviewers forgot all the points of interest. Like does the memory now clock like it does on Z77 with Ivy (or better), which would put out some huge memory bandwidth numbers. How are the temps on Ivy with solder? Same as Sandy? Does this settle the debate that the only cause of Ivy’s increased heat is the paste and not the 3d transistors? It certainly disproves the rumors that Ivy can’t be soldered due to the top layer material.

      • theadder
      • 6 years ago

      http://www.anandtech.com/show/7255/intel-core-i7-4960x-ivy-bridge-e-review

      Anand does some good stuff.

    • Ushio01
    • 6 years ago

    Off to Anandtech for the review instead as reading this is giving me a headache.

    I’m visiting Techreport less and less as the tech is replaced by nonsence.

      • indeego
      • 6 years ago

      Thank you for registering an account, your comment, the slight irony in the misspelling in your comment, and leaving!

      • Airmantharp
      • 6 years ago

      Anand actually wrote their Ivy-E review- so yes, it is well done. But on average, I’ll take TR every time, as Anand has been less and less involved in their PC side in the last few years.

        • September
        • 6 years ago

        As has Intel…

          • Airmantharp
          • 6 years ago

          I’d say that they’re pretty well involved- just not in pushing absolute performance, as much as making that performance more efficient.

          Yeah, I could use more power, but lower power usage and lower noise are both still very large goals of computing.

          And it’s not like they’re getting slower, looking at you AMD!

      • pidge
      • 6 years ago

      I like Anandtech but I don’t visit them often because it is too focused on mobile. I’m more into PC hardware which I upgrade more often than I do my cell phone. I like TR for their PC coverage.

        • mcnabney
        • 6 years ago

        I visit Anand less because there is too much political crap displacing actual tech,

          • just brew it!
          • 6 years ago

          Unfortunately, technology (and the implications thereof) have become increasingly political. Fact o’ life.

          If politics is going to intrude into tech, I’m not going to second-guess Scott’s decision to do the reverse by injecting a little political satire into what would’ve otherwise been a run-of-the-mill review of a CPU few of us can afford anyway.

    • HisDivineOrder
    • 6 years ago

    I still think Intel could give us a version of the hexacore without hyperthreading. Instead of choosing between 6 cores + hyperthreading versus 4 cores + hyperthreading, they also throw in a hexacore without hyperthreading for the same cost as the quadcore with hyperthreading.

    I suspect many would choose the sans hyperthreading option and see nothing but gain from it for certain kinds of loads.

      • Airmantharp
      • 6 years ago

      They certainly could- but it doesn’t make much sense from a manufacturing perspective. HT is built in to every Core CPU, and selectively disabled, while the hex-core SKUs are actually far larger silicon, and currently only ship for the high-end.

      We’d need to see hex-core LGA115X SKUs before we see a hex-core without HT, I’d think.

      What I’d really like, though, is for Intel to ship a non-EE octa-core SKU with HT. I’d pay the ~$700 such a CPU would reasonably go for, and I’d definitely get my money’s worth out of it.

        • NeelyCam
        • 6 years ago

        Do we know if there is a special, 4-core IB-E version that’s *not* a 6-core with two cores disabled?

          • Airmantharp
          • 6 years ago

          I’m actually not sure, and I’m not terribly inclined to look it up- while there is a definite advantage for having quad-channel memory and a full forty PCIe 3.0 lanes even with a quad-core CPU, that advantage is pretty well lost to me, personally. To me, those are just periphery features next to getting a full ~50% boost in CPU performance, considering that you lose some to multi-threading, but gain some because the OS and always-on utility apps are soaking up the first core or so, by welding two more CPU cores on.

      • Klimax
      • 6 years ago

      Hyperthreading has negative performance gain for some loads only in SB and IB. With Haswell that was fixed.

        • Waco
        • 6 years ago

        That’s not entirely true. Any workload that fully saturates (or comes close) to the various CPU resources won’t see any increase from Hyperthreading at all (and will likely go slower due to contention).

          • Klimax
          • 6 years ago

          Biggest observable problems were with integer code. In floating point there isn’t IIRC code which had such pathological cases. (However it didn’t have doubling effect like physical core) And with Haswell changes, integer cannot block floating point or fp int, due to new ports.

          Good infographic is here: http://www.anandtech.com/show/6355/intels-haswell-architecture/8

          The effect is best seen when one graph has SB/IB, Haswell, and Bulldozer (BD because it has double integer cores) and includes both i5 and i7 variants.

          Note: I tried to find the test I thought showed the effect, but it wasn’t there. (AIDA Hash, which is one of the few benchmarks where BD and non-HT CPUs are better than HT.) I’m sure somebody has one out there, but I can’t find it at the moment. The test I looked at had quite a jump for HT, but no significant penalty to SB/IB CPUs.

            • Waco
            • 6 years ago

            IIRC Linpack/HPL performs worse with HT enabled versus half the threads on the same chip even with Haswell.

            • Klimax
            • 6 years ago

            Wouldn’t surprise me considering nature of test or its aim.

            (And some games managed apparently to get allergy on HT… like Prototype.)

    • Srsly_Bro
    • 6 years ago

    On page 9 the AMD 8350 is grouped in high end with the 3970x and 4960x. The 4770K was grouped in mid-range. I hope high-end referred to each company’s high-end offering, and not performance.

    • DeadOfKnight
    • 6 years ago

    And yet another win for the mainstream x4 Sandy Bridge. Yes, this really is getting old.

      • Airmantharp
      • 6 years ago

      That depends on how you define success there, Devil Dog!

      The only real disappointment here is X79- Intel could have put the Z87 southbridge on here and given us modern USB3 and more SATA3 ports, and that certainly would have been nice- but it’s far from a show-stopper.

      Success for me is this- real PCIe 3.0 support across all 40 CPU lanes, more robust high-speed memory support (and thus the possibility of lower-latency memory), and Ivy instruction sets and IPC across six unlocked cores.

      Really, all I’m waiting for is the budget oriented boards, and for tweakers to sniff out the most flexible steppings so I can get one as close to 5GHz as possible with an integrated water-cooler.

        • DeadOfKnight
        • 6 years ago

        I define success as performance per dollar. Your current chip is still leading the way there for 99% of applications. Ok, so give it a few years and maybe it’ll only be 90% of applications. I still don’t know how that would justify the upgrade now unless your usage scenario lies primarily in that minority. Even then, from a value perspective, AMD’s Vishera is the better deal for such things. And yes, persisting on X79 just makes it that much more unattractive.

          • Airmantharp
          • 6 years ago

          That’s the thing- a hex-core Ivy-E is really only $300 more platform cost, if you’re going high end anyway, and the performance per dollar is actually an advantage- if you need the performance.

          I could definitely make use of six hyper-threaded cores, even in gaming, but part of that is because I don’t run a ‘lean-n-mean’ gaming setup, I run a full workstation, and I can see that four cores, even at 4.5GHz, has no margin left. It’s topped out, and since going faster than 4.5GHz isn’t really much of an option, throwing more cores at the problem is the only real solution.

            • DeadOfKnight
            • 6 years ago

            It’s a solution to a problem that doesn’t exist and probably won’t exist for the vast majority of games even, so long as consoles are the platform of choice. But hey, we might see a small handful of PC exclusive titles come out that will make use of it to some degree in the next few years.

            I just don’t really see it becoming a matter of necessity. I mean, the developers still want to get their games out to as many people as possible, even on the PC. That means scalability down to the lower end parts and that means not fundamentally changing the games to truly put those cores to good use. The most you’re gonna see is what you see now, a mild boost in performance numbers. Like going from dual-core to quad, only not as dramatic.

            I have no doubt we’ll get there, but we’re just not there yet. I mean, if you’re like all these old guys still running on their Q6600s, sure, get the processor that’s gonna last you the longest. However, if you’re prone to upgrading in the next 3-5 years even with the new Ivy-E, I’d just stick with what you have for now as it’s probably a waste.

            • Airmantharp
            • 6 years ago

            One way to look at it is this-

            Do you remember the severe disparity between what reviewers said would run BF3 ‘well’ based on canned benchmark data from single-player runs, and what it actually took to make the game responsive during a 64-man destruction derby run?

            It’s like that. On consoles, if the game runs slow, it runs slow for everyone- no one has an advantage. And it will run slow. But on PC’s, we have the ability to tinker and optimize for better results. And that’s the thing- games have now caught up to the quad-core CPUs we’ve been using. They can max them out, because they’re actually multi-threaded, and are capable of distributing their most demanding tasks dynamically.

            And that’s where my argument lies. Because future console games will be even more robustly multi-threaded- we’ve already seen that in the likes of Crysis 3, though that is a bad example- having a hex-core CPU for higher-end gaming makes sense, especially if that CPU will get worked out doing other tasks as well; in my more extreme case, rendering photos from Lightroom, layering stuff for HDR, panoramas, or fine-tuned processing in Photoshop, and editing and rendering videos in Premiere takes far more than my 4.5GHz 2500k and 16GB of DDR3-1600 can really give, so upgrading to something like the 4930k with 4x8GB DDR3-2000+ actually makes sense.

            • DeadOfKnight
            • 6 years ago

            In that case, yeah. Personally I’m waiting for a consumer grade 8-core from Intel to make the switch. Rumors say Haswell-E, although that may be limited to the extreme $1000 part so I might be waiting a bit longer. At any rate, I think that’s worth the wait as I doubt anything will be “maxing” it out for the remainder of this decade or more, at least for gaming. I just don’t think now is the best time, especially with Intel slacking as AMD tries to catch up. They’re gaining on them, and I’m hoping for that race to pick up some speed and actually force them to release some truly “enthusiast grade” components.

            • Airmantharp
            • 6 years ago

            Definitely.

            I really only have two reasons to upgrade- the first is that I’m getting heavier into content creation and I can definitely make use of the performance there, and the second is that I realize that future games will be able to make use not only of the extra cores, even if they’re not ‘needed’, but also the extra PCIe bandwidth provided by a pair of PCIe 3.0 x16 slots versus the pair of PCIe 2.0 x8 slots my 2500k is limited to. Of course, I’ll likely need to upgrade my GPUs, too, but I expect to do that after the CPU/motherboard/RAM.

            • DeadOfKnight
            • 6 years ago

            The thing about content creation is the gains are usually measured in minutes rather than milliseconds. It’s far more annoying for something to be unresponsive that should happen instantaneously than for something you expect to take some time to take a little bit longer.

            Your PCIe 2.0 lanes still aren’t a bottleneck, but I see what you mean. They could become the bottleneck if you’ve got a setup that will utilize all of the VRAM of a couple Titans. Still, not a major win as far as performance per dollar.

            • Airmantharp
            • 6 years ago

            Nope, the win is for content creation- and a significant one at that, moving from four Sandy cores to six Ivy cores, along with more bandwidth and lower latency, as I can buy better RAM this time around. The only mention for gaming is that it’s not a total loss, really- as I expect VRAM usage to go up significantly at ‘high’ and ‘ultra’ settings levels in the next year or so, the PCIe bus speeds will likely start playing a bigger factor than they have historically. Granted moving to Haswell would mostly erase that problem as well, but that CPU would still be significantly slower for intensive uses other than gaming.

            • DeadOfKnight
            • 6 years ago

            Still, that DDR4 might be an even bigger win next year, and you wouldn’t be giving up the latest instructions and IPC improvements. Not saying you shouldn’t move up to the “E” platform. Obviously Intel wants you to by taking away fluxless solder, TSX on “K” SKUs (not that that’s even a factor when looking at Ivy-E) and all of that. That might be exactly why they did it, because without some artificial neutering of their chips, Haswell seems like more of an enthusiast option than their $1000 Extreme Edition. There’s certainly a lot more features, especially when you consider the GPU and quicksync. Ivy-E just has two extra cores, more PCIe lanes (which shouldn’t matter too much on 3.0), and double the memory channels (which you pay even more for through additional RAM sticks, typically).

            I dunno, it just seems like a bad time to me. It seems like the calm before the storm. We’ve all been fairly unimpressed with the lackluster improvements these last couple years. Granted, they did come with some nice power efficiency and IGP gains, even if those weren’t much to get excited over. I just feel that something big is bound to happen for us now that they’ve probably reached their goals in those areas. They’ve got comparable graphics to Trinity (better in the Iris Pro) and they’ve pushed TDPs down so far that mobile-binned parts have impressive battery life. I can’t tell you what to do with your money, but you might be in for some buyer’s remorse.

            • Airmantharp
            • 6 years ago

            It sure doesn’t seem like a ‘good time’, you’re right on that point. And I’ll always be in for buyer’s remorse, I’m used to it now :).

            But I don’t see DDR4 being a significant improvement over top-end DDR3 on release- DDR3 sure wasn’t over DDR2, and the predictions for DDR4 performance put it at about parity, or possibly worse, except from a power usage standpoint (which we’re throwing out the window anyway with an -E platform).

            I’m looking at this purely from a performance perspective- for a mobile platform, a Haswell-based Retina Macbook Pro is where it’s at.

            • DeadOfKnight
            • 6 years ago

            Yeah, DDR4 isn’t anything exciting, but it will be replacing DDR3 so for longevity and the ability to recycle these parts in the future it seems wiser not to invest in hardware that is nearing obsolescence.

            The way I see it, the next generation of computing technology begins next year. You could say it’s coming earlier with the consoles, but the DIY hardware those are based on (in a single APU) isn’t even due until 2014. (A bit of assumption here, as they’ve been teasing a “Kabini-Zilla” chip, but haven’t said when it’ll come out.)

            We’re moving towards fully-integrated HSA PCs, and even as a power user you will want your system to bear some resemblance to such things, albeit with beefier components. It’s all about what the devs are targeting.

            I’m not talking about just the CPU here, but the GPU. Nvidia is giving their new Maxwell architecture an awareness of the CPU and resource sharing, something I expect AMD is up to as well. Right now is a bad time.

        • jihadjoe
        • 6 years ago

        If by success he meant longevity, then yes Sandy Bridge has indeed proven itself a true winner.

        Nearly half a decade on and the 2500k/2600k can still hang with the fastest contemporary processor, and I’m sure early adopters of that generation will be reading this report and giving themselves well-deserved congratulations on money well-spent, or not spent as the case may be.

          • chuckula
          • 6 years ago

          “Nearly half a decade on”

          Sandy Bridge launch: January 2011. Date of post: September 4, 2013. How does ~2.75 years == five years all of the sudden?

          This is a psychological effect that is related to Tick-Tock that I have pointed out a few times in the past: when Intel came out with new parts much less frequently in the past, there tended to be bigger performance jumps between them. Now with something new coming out every year, there is this weird time-compression occurring where we think that Sandy Bridge is some relic from the ’90s and big bad Intel hasn’t made any improvements in decades since SB is just SO OLD.

          Let me tell you something: I built a 4770K box back in June to replace a Core 2 Duo from early 2008, an *actual* time span of about 5.25 years. Believe me, anybody who says that Intel hasn’t made any improvements in 5 years is WELCOME to pay me the full purchase price for my Haswell box to take away the old Core 2 system if he truly believes that there haven’t been any performance improvements in the meantime.

            • ClickClick5
            • 6 years ago

            Exactly. I’m still rocking the 2600k, upgraded from a Q6600. And the jump was very easily noticeable. Now I’m just waiting for a reason to upgrade again. My original goal was Haswell-E, bbbbuuuuuttttt, we shall see.

            • Krogoth
            • 6 years ago

            There isn’t that much of a performance delta from the Core 2 family to Nehalem-Haswell (assuming clock speed is equal) unless you are running an application that takes advantage of the extra threads and newer instruction sets in a high-end Nehalem-Haswell part. Most of the perceived performance gain comes from the higher stock clock speeds (Core 2s hover around 2-3GHz at stock, while newer Sandy Bridge parts start at 3GHz and go nearly to 4GHz). The extra bandwidth from moving to a QPI link also helps. Bloomfield and newer can take full bandwidth advantage of DDR3, unlike the FSB setup in Core 2 parts paired with DDR3.

            Power consumption, on the other hand, is a night and day difference, especially with Sandy Bridge and newer silicon. Higher-end Core 2 and Bloomfield need a huge HSF and a louder fan to keep themselves cool at 3.0GHz. SB and newer can chill away at the same speeds with a more modest HSF and a low-RPM fan that is whisper quiet.

            This is coming from somebody who jumped from a Q6600@3.0GHz to a 3570K@4.0GHz.

            • chuckula
            • 6 years ago

            Sorry Krogs, but you are flat out wrong there and I’ve done the test runs to prove it.

            Before I did the overclock of my Haswell box I did some old-school single thread benchmarks between it and my OC’d Core 2. The Haswell is anywhere from 3.5 – 3.9 GHz (assume 3.9GHz for turboboost) and the Core 2 was at 3.6 Ghz, so the clockspeed delta is less than 10%. A few of the benchmarks that I ran include the very old BYTE number-crunching runs that are *not* multithreaded and are *not* using AVX instructions or the like, and I also ran the nice and inefficient pystone benchmark. All in all my Haswell box was about double the performance of the Core 2 using these benchmarks that are specifically *not* taking advantage of the modern features in new CPUs.

            Believe me, once the software gets more modern, the performance delta spikes through the roof. Using an optimized version of Linpack my Haswell box is putting up ~247 GFlops of double-precision number crunching compared to ~25 GFlops for the Core 2, so that’s a jump of nearly 10x computational power.

            • Krogoth
            • 6 years ago

            You realize that Haswell unit has more bandwidth at its disposal and hyper-threading on the higher-end models? Core 2 was known to be held back by its FSB once you cranked up the clock speed. Back in the day, highly OC’ed Phenom IIs were actually faster than highly OC’ed Core 2 Quads because of that. However, this was overshadowed by the new Bloomfield chips.

            BTW, did you have a quad-core Core 2 or dual-core verison? If it is the dual-core version, then it is no surprise that Haswell unit would yield a 100% or more gain in multi-threaded applications.

            • chuckula
            • 6 years ago

            “You realize that Haswell unit has more bandwidth at its disposal and hyper-threading on the higher-end models?”

            Fascinating, two posts ago it was “Intel hasn’t made any improvements, they just upped the clockspeeds to fool everyone.” Now all of the sudden Intel is “cheating” by adding all these features…

            Yeah! Except for all the improvements that Intel has made over the last 5 years, they haven’t made any improvements at all!

            • Krogoth
            • 6 years ago

            You are a fool. I never stated anywhere that there were no improvements over the last generations of Intel silicon. I merely stated that the performance delta hasn’t grown that much over the last several generations (50-60% at best if Core 2 is the baseline, assuming clock speed is equal). Most of the perceived performance bumps come from the higher clock speeds. Bandwidth helps, but it isn’t a universal improvement (there are applications that don’t take advantage of it or need it).

            I have played around with both platforms with real software, not silly synthetics. I did see a performance boost with the i5-3570K, but it wasn’t that earth-shattering. The only thing that was earth-shattering is that the i5-3570K could do it without being a blast furnace.

            There are a ton of Bloomfield users who are perfectly content with their systems. They are almost as fast as Haswell in applications that don’t take advantage of the newer instruction sets.

            • jihadjoe
            • 6 years ago

            I stand corrected. Half of half a decade, then.

            • chuckula
            • 6 years ago

            I’ll give that answer quarter.

          • Airmantharp
          • 6 years ago

          I’m one of those enjoying said longevity, for sure-

          Just note that a) I’m finding that, personally, I could use more CPU performance and that b) said longevity isn’t necessarily a good thing- it means that performance hasn’t moved forward enough for software to be written to really demand more performance on a large scale.

          • Krogoth
          • 6 years ago

          The Sandy Bridge family is almost 3 years old (it came out in Q1 2011). It’s the Bloomfield family that came out 5 to 6 years ago.

            • JustAnEngineer
            • 6 years ago

            Sandy Bridge launched on January 9, 2011.

            • Krogoth
            • 6 years ago

            Which is within Q1 2011….. So?

            • moose17145
            • 6 years ago

            Yea! Lol, Sandy Bridge… she’s got nothing on my i7-920 Bloomfield when it comes to longevity! I’m still waiting for a compelling reason to upgrade! And I built my system in December of 2008!

            Honestly, the most compelling thing I see about these is some of the newer virtualization stuff built into them that my 920 lacks. And honestly, for that kind of stuff I would rather be looking at the Ivy-EP chips with 8-12 cores anyway, because virtualization IS something that can most definitely make use of moar coars (and also moar rams).

    • Sargent Duck
    • 6 years ago

    Usually I read the front page (for the witty remarks), quickly look at a few graphs then read the conclusion.

    But this write-up was pure gold. Bravo sir!

    • Deanjo
    • 6 years ago

    I WARNED YOU ALL!!!

    [url<]https://techreport.com/news/25295/enter-here-to-win-an-msi-geforce-gtx-760-graphics-card-and-corsair-hx1050-psu?post=755646[/url<]

    • tanker27
    • 6 years ago

    I got a chuckle out of the article. Good one Scott. 😛

    • ronch
    • 6 years ago

    Looking at the power consumption graphs, Intel’s ‘EE’ chips seem to use lots of power even at idle. What could be causing it? Is it the copious amounts of cache that always need to be fed with juice? I’m not sure, because it’s not like the EE chips have a lot more cache per core (IB has something like 1.5 to 2MB per core depending on the SKU, while the EE chip reviewed here has 2.5MB/core). But if you compare Vishera with Trinity, Trinity’s idle power numbers are awesome. Note that Trinity doesn’t have Vishera’s oceans of L3 cache.

    It almost seems like putting in lots of L3 does hurt power consumption.
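
    For reference, a quick sketch of the cache-per-core arithmetic behind that comparison; the L3 sizes and core counts are the published specs for a few representative SKUs.

      # L3 cache per core for the chips mentioned above: (L3 size in MB, core count).
      skus = {
          "Core i5-3570K (Ivy Bridge)": (6, 4),
          "Core i7-3770K (Ivy Bridge)": (8, 4),
          "Core i7-4960X (Ivy Bridge-E)": (15, 6),
      }
      for name, (l3_mb, cores) in skus.items():
          print(f"{name}: {l3_mb / cores:.1f} MB of L3 per core")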

      • Airmantharp
      • 6 years ago

      It’s the platform: X79 plus all of the third-party controllers needed to make up for the feature disparity with Z87, along with four memory channels and forty lanes of PCIe 3.0, sets the power-draw floor pretty high.

      Now, if you actually use all of that stuff, the comparison with other platforms is moot- none of them can deliver the performance, and I’d be willing to bet that the ‘task’ efficiency of Ivy-E and the X79 platform is still comparable to IB, at least, while being significantly faster.

        • ronch
        • 6 years ago

        I’d love to see the actual power draw of this CPU alone. Nonetheless, all those extras, justified as they may be, will inevitably draw more from your power outlet.

      • JumpingJack
      • 6 years ago

      To make a good conclusion you would really need to analyze power usage at the socket rather than the entire system. The chipset in particular is a major source of power draw.

        • ronch
        • 6 years ago

        Yes, but we’re talking about an increase from 46W (3770K) to 85W (4960X). And these are idle power numbers, so the idle cores are gated off. And as I’ve said, it’s not like there’s a lot more cache.

        • just brew it!
        • 6 years ago

        Unless the CPU has an on-die power usage monitor (like the AMD Bulldozer/Piledriver CPUs have), that’s going to be rather difficult.

      • Klimax
      • 6 years ago

      In the article there is this:
      “First of all, the idle power draw on their X79 motherboard, the Asus P9X79 Deluxe, is way too high. Have a look over here, and you’ll see the same basic setup on a Gigabyte board drawing 63W at idle. So that’s bogus.”
      And “here” points to [url<]https://techreport.com/review/24996/nvidia-geforce-gtx-760-graphics-card-reviewed/9[/url<] It's the Asus mainboard, not the CPU...

    • ronch
    • 6 years ago

    Hey guys, I can’t read this review properly, or at least I can’t read what Scott actually wrote. It seems like some unknown jerk out there has hacked this article and replaced what Scott wrote with his own crap. I’m serious, gerbils. I posted about this problem in the forums [url=https://techreport.com/forums/viewtopic.php?f=1&t=89302<]here[/url<]. Scott, what do you make of this?

    Edit - Ok, is Scott just playing around with this? Some of the comments on the post I made in the forums seem to think so.

      • Damage
      • 6 years ago

      I can’t seem to edit it, either, and one of the pictures looks like it’s… moving. Man, so weird.

        • ronch
        • 6 years ago

        This jerk here probably has nothing better to do. I suggest he dunk his head underwater for an hour.

        • Cyril
        • 6 years ago

        There is nothing wrong with your television set. Do not attempt to adjust the picture.

        • ronch
        • 6 years ago

        That moving picture… it almost seems scary.

        If anything, this article really makes you feel like the NSA is breathing down your neck. Oh boy.

        • ClickClick5
        • 6 years ago

        Puff the magic dragon lives by the sea…

      • derFunkenstein
      • 6 years ago

      I honestly thought you were kidding in your thread.

      • flip-mode
      • 6 years ago

      Your frontal lobe is functioning below normal levels.

        • chuckula
        • 6 years ago

        Ronch… I thought all my trolling would have properly trained you in the use of sarcasm by now!

        Unless of course you are doing your own spoof by doubling down on the sarcasm back at the TR staff… in which case I say: Well played, sir!

          • flip-mode
          • 6 years ago

          If it’s sarcasm he’s doing it wrong. Sarcasm needs a “tell”.

          • ronch
          • 6 years ago

          No comment.

            • MadManOriginal
            • 6 years ago

            Commenting paradox.

            • flip-mode
            • 6 years ago

            Took a few seconds, then, I laughed.

    • Forge
    • 6 years ago

    The last picture on page 1… is moving. I even had a non-sleep-deprived person check. Why is it moving?

      • krazyredboy
      • 6 years ago

      If the motherboard’s a rockin’, don’t bother knockin’…

      • ronch
      • 6 years ago

      I’d ease off with the bubblies if I were you.

      • Krogoth
      • 6 years ago

      It is a recent internet fad. Some people are using an old technique to give 2D images some pseudo-3D perspective. The other variant shakes the image around to make the person look like they are having a seizure or are under extreme pressure.

    • Kretschmer
    • 6 years ago

    Bravo! I was completely unexcited to click on the link for an overpriced and incremental product, but you pleasantly surprised me.

    Would it be possible to include i3 parts on the gaming benchmark pages? If there is room for AMD APUs it should be possible to include Intel’s value line.

      • Chrispy_
      • 6 years ago

      Agreed.

      There’s a distinct lack of 2C/4T results on the Intel side, and the results wouldn’t be relevant just to i3s, since we’d be extrapolating the info to the mobile line, where 2C/4T spans the i3, i5, and i7.

        • Damage
        • 6 years ago

        The Core i3-3225 is in there, just not in the individual colored plots. Look again!

          • JustAnEngineer
          • 6 years ago

          I saw the Core i3 in there. Shall we assume that the Core i5-3470 is standing in for the Core i5-4670K, as well?

          • Chrispy_
          • 6 years ago

          Ah yeah, my bad – I guess I’m just as bored looking at the graphs as you were making them, with predictably similar results to Sandy-E.

          Intel’s processor benchmarks have been dull as ditchwater since Ivy Bridge; only the IGP changes have held any interest for me, because that’s the only area with any real progress.

          • Kretschmer
          • 6 years ago

          Thank you; how did I miss that?

    • Krogoth
    • 6 years ago

    Ivy Bridge-E = a straight die-shrink of Sandy Bridge-E that comes with a little less L3 cache and performs as such. Intel just bumped the clock speed of the 4960X a little bit to make it look faster than the 3960X it replaces.

    Enough said.

    Great article BTW.

      • flip-mode
      • 6 years ago

      The Krogoth equation:

      (new thing) = [(old thing) + (yadda yadda) / (enough said)]

      The equation does have some shorthand forms:

        [url<]https://techreport.com/discussion/24832/nvidia-geforce-gtx-780-graphics-card-reviewed?post=732936[/url<]
        [url<]https://techreport.com/discussion/24381/nvidia-geforce-gtx-titan-reviewed?post=710759[/url<]
        [url<]https://techreport.com/discussion/21987/intel-core-i7-3960x-processor?post=596284[/url<]

        • MadManOriginal
        • 6 years ago

        You forgot the exponent:

        ^(nobody needs more than a mid-range product)

          • Krogoth
          • 6 years ago

          Incorrect; the vast majority don’t need it. The people who have a need for it are professionals for whom time is money. A $500 premium for 2-5% more performance over the previous grade is worth it to them.

    • ClickClick5
    • 6 years ago

    Excellent. So when I go to finally build a Haswell-E system next year, I’m going to be faced with a roughly 4% boost over Ivy-E? Herm….
    I miss the 20%+ jumps.

    • chuckula
    • 6 years ago

    I think page 9 (the power efficiency tests) shows that these chips are actually a huge success for their intended audience, which is servers, not gaming PCs. These chips are BORING from the perspective of most TR readers, but in the server world, the combination of raw performance and power efficiency is going to be very hard to beat.

    A 4960X running flat out with 4 channels of RAM and the whole system uses about the same amount of power as a standard A10-6800K desktop? I think I know why AMD’s server roadmaps only include steamroller in the form of a rebadged Kaveri APU part. They’ve pretty much abandoned large servers and are now focused exclusively on APUs in servers and in “micro” servers with Kabini.

      • ermo
      • 6 years ago

      As much as it pains me to admit it, you’re probably right.

      C’mon AMD, get back in the game!

        • chuckula
        • 6 years ago

        The good news for AMD in the consumer market is that they still have a big IGP advantage. They can do pretty well in microservers as well, although how much money they will make in a crowded field where Intel is not the only competition is still a question.

        [Edit: I’ll take it by the downthumbs that the AMD crowd has no faith in the ability of AMD to produce IGPs… sort of sad really.]

    • Bensam123
    • 6 years ago

    Not to poo-poo good humor, but these reviews are really hard to read (like the high-school girl one). I guess there wasn’t really a whole lot to talk about, and that’s why Scott decided to dress it up.

    Just another iteration; this is almost as exciting as AMD’s small clock bumps to Phenom a few years back. The 8350 is still priced in a nice spot on those charts too. ^^

      • Prion
      • 6 years ago

      [quote<]good humor[/quote<] [b<]:([/b<]

    • chuckula
    • 6 years ago

    [quote<]Need more cores, cache, and memory bandwidth for a real application. (2%)
    Need higher memory capacity for actual workloads. (4%)
    Need more PCIe lanes for multi-GPU configs. (1%)
    Thinks you need more PCIe lanes for multi-GPU configs. (12%)
    Need more knobs for extreme overclocking. (5%)
    Bragging rights, money > sense. (51%)
    Clicked the wrong button on Falcon Northwest online store. (27%)[/quote<]
    I'm glad to see that you guys gave this review the full 102% effort!

      • StainlessSteelMan
      • 6 years ago

      It’s symptomatic that you needed to check! I would never lose any sleep over that multiple-term addition…

    • chuckula
    • 6 years ago

    Hrmm… I seem to recall estimating a die size of 275 mm^2 when the first leaked photos showed up, so 257 mm^2 is in the ballpark (actually a little smaller than I would have expected). It’s interesting to see a native 6-core die as well, so I guess the big-brother 12-core parts at ~500 mm^2 will be reserved for servers.

    That’s good news & bad news: The good news is that you aren’t getting a “crippled” part if you buy the 4930K. The bad news is that for real heavily threaded workloads you are still stuck with Xeon.

      • f0d
      • 6 years ago

      Does it really matter if it’s a crippled CPU or not?
      The SB-E 6-core was a crippled 8-core, and it made zero difference to its performance, and that’s all I care about.
      Sure, it costs Intel more to make them, but they sell them at the same price as non-crippled CPUs anyway (see 6-core SB-E vs. 6-core IB-E), so who cares if it costs Intel more money to make it?

      I actually want a 12-core crippled down to 10 or 8 cores, but I know it’s only a dream – no need to say my expectations are too high 🙂

        • chuckula
        • 6 years ago

        [quote<]does it really matter if its a crippled cpu or not?[/quote<] Not really, although the angry posts about how Intel was "crippling" its parts were quite common when SB-E came out.

          • NeelyCam
          • 6 years ago

          Yeah; some people don’t understand economics

        • jihadjoe
        • 6 years ago

        If you still want a crippled part, there’s always the 4820K, which is now a crippled 6-core.

        Intel just moved the markers out a bit. During SB-E/EP, we had:
        a fully enabled 4-core,
        a 6-core harvested from the 8-core,
        and a fully enabled 8-core.

        With Ivy-E/EP we have:
        a 4-core harvested from the 6-core,
        a fully enabled 6-core,
        an 8-core harvested from the 10-core,
        a fully enabled 10-core,
        and a fully enabled 12-core.

        What I’m curious about is how they implemented the 12-core chip. I wonder if it is a completely new design, or a 6+6 similar to Kentsfield.

      • pumero
      • 6 years ago

      The only CPU that’s not “crippled” is the 4960X, since it has the full number of cores and all of the L3 cache physically available on the die. The 4930K comes with 3MB less L3 cache, and I’m pretty sure that at least some of them are missing those 3MB for a good reason.

        • JustAnEngineer
        • 6 years ago

        Market Segmentation seems like “a good reason” to some evil marketing geniuses.

          • ermo
          • 6 years ago

          When AMD does binning, it is “making lemonade out of lemons”. When Intel does binning, it’s “evil marketing genius”.

          I get that Intel has awesome fab tech. But even Intel presumably produces chips which can’t quite cut it and need to have some of their transistors disabled due to manufacturing defects or simply substandard performance.

          Of course, sometimes it does make sense to take good chips and bin them lower just to move more silicon to customers. After all, two sales at $555 each are probably still better than no sale at $999.

    • Bauxite
    • 6 years ago

    As you can see, my young enthusiast, our competitors have failed. Now witness the firepower of this fully armed and operational monopoly!

    Price at will, Marketing!

      • f0d
      • 6 years ago

      Yep.
      No competition from AMD, so per-core performance has stalled.
      The future of CPUs seemed better when the CPU battle was closer 🙁

      C’mon AMD, pull another K7 out of nowhere and start up the race again!

        • Geonerd
        • 6 years ago

        Gotta wonder what the guys in CORPINT know about Steamroller on the AM3+ platform.

        (AMD’s new ‘Chirping Crickets’ marketing campaign isn’t exactly wowing the masses… )

      • Krogoth
      • 6 years ago

      +1 for being impressed by this post.

        • indeego
        • 6 years ago

        You’re losing your touch.

          • DeadOfKnight
          • 6 years ago

          Well, if he wants to kill it, he’s going about it the right way.

          If he doesn’t want it to end then he’s got it all wrong.

      • chuckula
      • 6 years ago

      So why aren’t they more expensive then? I mean, when the FX-9590 is $850 on Newegg (after a price drop), why isn’t the 4930K going for $1200 instead of being $300 less than the FX-9590???

        • cynan
        • 6 years ago

        Psssst. Because other than a few AMD fanboys with more $$ than sense, nobody is actually buying the FX-9590. AMD couldn’t sell them through system integrators and they can’t sell them now that they’ve tried to dump them in retail channels. Surprise, surprise. The 4930k is just following the same pricing scheme as the 3930k.

        Edit: And in case you need more proof that The FX-9590 is not even close to competitive at $850, there must be some reason it’s selling for [url=http://www.scan.co.uk/products/amd-fx-9590-black-edition-vishera-8-core-s-am3plus-clock-47ghz-turbo-5ghz-4mb-x2-l2-cache-8mb-l3-cac<]as low as $350[/url<] in the UK.

          • Deanjo
          • 6 years ago

          Ya I bet in a month or two you will be seeing them sold for around $250.

            • Airmantharp
            • 6 years ago

            Which is exactly what they’re worth- a slight bump in performance for a marginal increase in price. Not that that’s a bad thing, at all- there are certainly tasks that Piledriver excels at, and having a guaranteed 5.0GHz clockspeed really can’t hurt.

      • Klimax
      • 6 years ago

      What has changed in the market since SB-E? Oh, and there’s no such thing as pricing at will, because they still want to sell these and to get us to upgrade, so they are competing with their past selves.

      • ssidbroadcast
      • 6 years ago

      IT’S A TRAP!

      • JdL
      • 6 years ago

      Monopoly schmonopoly. They are out-innovating everyone else. Plain and simple.

        • NeelyCam
        • 6 years ago

        Try saying that on S|A forums and see what happens.

        Most likely you’d get banned for trolling

          • Airmantharp
          • 6 years ago

          But they are out-innovating everyone else, at least in those markets they choose to compete in. That’s not to say that they’re dominating said markets- but they are innovating heavily.

    • derFunkenstein
    • 6 years ago

    The “see what they made of it” link on the comments page just links back to the comments page, so I can’t get back to the review without going to the front page first.

    At first I was like “oh great, a bunch of NSA jokes,” and then I realized exactly how much of a known quantity Ivy Bridge is at this point, so I suppose it was better and more interesting than the alternative. I liked the disgruntled [url=https://techreport.com/review/20486/intel-core-i7-990x-extreme-processor<]i7 990X review[/url<] a bit better, but this was still a good effort overall.

    • Chrispy_
    • 6 years ago

    You had me grinning like a small child as I read the webcam/overclocking page, Scott. Good job!
    I’d only just recovered from [i<]Blanda's big list of reasons to build an X79-based system[/i<] 🙂 It's nice to know X79 is still a massive waste of money for us regular folk. Hell, it's a massive waste of money for our vis team and their renders too, since anything that used to be massively multi-threaded on the CPU pretty much runs on their graphics cards these days.

    • Elusivity
    • 6 years ago

    Scott, you’ve got an error in the first table on the first page, should read ‘Core-i7-49xx’ not ‘Core-i7-46xx’.

    • puppetworx
    • 6 years ago

    [quote<]This is Blanda up in observation.[/quote<] Hahaha. For a moment there I thought I was reading 4chan. Opening the review, I thought I'd just skim to the end for the conclusion (Extreme-series stuff has never interested me), but I ended up reading the whole thing. Touché, sir.

    • JustAnEngineer
    • 6 years ago

    Did you try Haswell and the other processors with PC3-14900 memory to match what you used with Ivy Bridge-E? I wouldn’t expect a huge difference in performance, but it’s a clear difference in your test setups.

    Overall, the Core i7-4960X is an example of what Intel does when they don’t have serious competition. That 8/10/12-core CPU that we should have gotten at the i7-4960X’s $1,000 price? It’ll be $2,000 to $3,000.

      • MadManOriginal
      • 6 years ago

      Die size, driven by a reasonable amount of cache per core, is part of the reason such CPUs are expensive. I don’t know how much performance would be lost if the total cache were quite a bit smaller to keep the die size reasonable.

    • TwoEars
    • 6 years ago

    It is a tough job staying excited about CPUs these days.

    I remember the good old days when every new generation saw a 20-50% performance increase.

    Might as well scrap the “Extreme” line at this point. There’s little point if it keeps appearing so late, after the next-gen mainstream processors have already been introduced. Maybe that was someone’s plan at Intel? See how poorly the Extreme segment is doing! We have to scrap it! No one cares about Extreme processors! Well – duh! Not when you introduce it 1-2 years later and without USB 3.0 support, etc.

    • f0d
    • 6 years ago

    Not an upgrade for me (already on Sandy-E, socket 2011). I’m hoping they will eventually release an 8/10/12-core Sandy or Ivy-E so I can encode Blu-rays faster, since each encode is taking 24 hours with the settings I have in Handbrake.
    I would love that CPU Jenkins is testing, as long as it isn’t a Xeon.

    A Xeon is not really an option because you can’t overclock them, and my 5GHz Sandy-E would be close to a stock-clocked Xeon.

    hilariously funny review though 🙂

    • dragosmp
    • 6 years ago

    Yay for the creativity; it made reading the review worthwhile.

    …slightly disappointed by the CPU and I wasn’t expecting much.

    • A_Pickle
    • 6 years ago

    I laughed at the Truecrypt results. :3

    • Jigar
    • 6 years ago

    If I remember correctly, this is the second time Damage has posted a review in a different fashion. I loved it. Thanks!

    • jjj
    • 6 years ago

    A mainstream-part die size for a ridiculous price; someone is being an … Intel.
    And to think they could revive the enthusiast market if this one were $300. But why bring some life back into the PC? Let it rot, right?

      • NeelyCam
      • 6 years ago

      [quote<]Mainstream part die size for a ridiculous price, someone is being an ... Intel.[/quote<] Sorry, but no - "mainstream part die size" these days (i.e., at 22nm) is around 160-180mm^2: [url<]http://www.anandtech.com/show/7255/intel-core-i7-4960x-ivy-bridge-e-review[/url<]

        • chuckula
        • 6 years ago

        [Psst… Hey Neely! He meant “Piledriver” when he said “mainstream” ….. once again trying to turn AMD’s huge die sizes with so-so performance and lousy power usage into some sort of benevolent sacrifice in the name of consumers or something like that]

      • Krogoth
      • 6 years ago

        This isn’t a mainstream part, though. It is a server-grade chip that failed to qualify for the Xeon brand. So Intel rebranded it as a high-end to ultra-high-end part for people who don’t know any better.

        • chuckula
        • 6 years ago

        So you’re saying it’s [b<]not[/b<] high-end? The benchmarks seem to say otherwise even if you want to put on the "meh" show about how it's not XX% faster than its predecessor. I'm not seeing anything else that's remotely close right now, and frankly at $550 there actually *is* a price/performance argument to be made for the 4930K for professional (i.e. not video game) workloads.

          • Krogoth
          • 6 years ago

          For professionals, it is not high-end, since they would be going for the Xeon-grade stuff, which commands a higher premium for a better TDP at a given clock speed and for ECC support. The multi-socket versions go even further beyond that.

            • Airmantharp
            • 6 years ago

            Krogoth is right- you have to differentiate between ‘high-end’ and ‘professional’. In this case, the product would most likely fit the ‘high-end consumer’ category because, as pointed out, it doesn’t support ECC, and the TDP ratings are higher compared to the fully-professional Xeon SKUs, indicating that they are binned lower.

            If I were doing critical work, I’d be using more sockets and ECC instead of higher clockspeeds- the difference in price is more than justified by being able to be certain in my results.

        • Klimax
        • 6 years ago

        Or for those who can’t afford Xeons with the same specs… And I’d say those “failed” chips work quite well…

        • jihadjoe
        • 6 years ago

        How does a fully enabled part fail QA?

          • Krogoth
          • 6 years ago

          It requires more voltage and wattage to operate at the same clock speed. The CPU itself doesn’t pass the more rigorous QA testing for the Xeon tier. Accuracy and stability are #1 in the professional world.

      • Airmantharp
      • 6 years ago

      For the average enthusiast playing at 1080p, this thing is way overkill. A 4770k and a top-end AMD or Nvidia consumer GPU (I’m excluding Titan here) would be far more economical, and would itself be overkill if there wasn’t an additional purpose beyond gaming for the machine that could really make use of the extra computing resources.

      I’d say, in light of their mainstream platform pricing, their ‘enthusiast’ pricing is right on target- the quad-core equivalent Ivy-E is the same price as the Haswell SKU, while the hex-core is right at 50% more- which is actually extremely reasonable pricing considering what you get.

    • MadManOriginal
    • 6 years ago

    First time I’ve seen an animated gif in a TR review. It made me think I ate something funny at first.

      • kuraegomon
      • 6 years ago

      +1 for smooth-as-silk FRIST post 🙂

      I’ll admit that I found the spook-voice premise a bit precious though. Still, I get it – Scott’s got to do _something_ to keep from being bored out of his mind after doing so many of these.

      • xand
      • 6 years ago

      Yeah so I just had a bottle of wine with lunch, and I went back to the first page to check it WAS an animated GIF.

      • flip-mode
      • 6 years ago

      Same.

      • Prestige Worldwide
      • 6 years ago

      At first I thought a colleague slipped something in my morning coffee. Good to know I’m not tripping balls.

      • derFunkenstein
      • 6 years ago

      I like that because of the completely white background, it looks like the system is dancing rather than the camera being shaky.

      • ClickClick5
      • 6 years ago

      My morning tea….wtf is in my tea?!

        • Firestarter
        • 6 years ago

        if the colours suddenly seem very bright and start to bleed into each other, then you can really start worrying!

      • Duck
      • 6 years ago

      For a split second there, I thought I was hallucinating.
