- Intel has abandoned desktop CPU design! Yes, I said that even though I'm eyeing desktop Z87 chipset boards right now. It's not that Intel has stopped making CPUs that fit into desktop motherboards; it's that the desktop is secondary and has been secondary for quite some time. For example, I could technically take a Samsung Exynos chip, add in extra chips to implement all the peripheral I/O I need like PCIe, and slap it onto a desktop motherboard too, but nobody would claim that those chips were designed as desktop chips either.
In the old days, Intel designed chips for the desktop, then neutered them until they could sort of be used in notebooks. That began to change with the Core and Core 2, which were actually derived from the older Pentium M Banias/Dothan cores due to the train wreck that happened with the Pentium 4. It continued with Sandy & Ivy and has been taken to an extreme with Haswell. Sandy & Ivy took power consumption seriously, but targeted "fat" notebooks at about 35 watts while being cut down for Ultrabooks if needed. Haswell appears to *really* be targeted at the 15-25 watt power range with more cut-down models going for tablets... and expect Broadwell to be targeted at even lower TDPs, so that high-end tablets will run quite nicely with Broadwell chips at sub-10-watt TDPs and average active power usage under 5 watts.
You'll note that I skipped over Nehalem in the discussion above. I would go out on a limb and say that the last real desktop platform and chips that Intel designed were the original Nehalems with the X58 chipset. They were clearly designed for high-end performance and features, and while they obviously weren't designed to be power hogs, power efficiency was measured on a performance-per-watt basis rather than an absolute power consumption basis. Oh, and as for Westmere and Sandy-E, those chips come from the opposite direction: instead of being scaled-up desktop chips, they are cut-down server chips. Unfortunately, while they have excellent multi-threaded performance, their single-threaded performance is no better than the overclocked mobile parts, the prices are steep, and the platforms lag behind because servers don't really care about a lot of the features that performance desktop users want.
So what does that really mean? It means that when you buy a Haswell desktop part, you are basically buying a factory overclocked mobile chip. Don't get me wrong, you can get some very nice performance out of that chip, but at the end of the day it is still a mobile chip that is being re-purposed for a different platform. You will lose something in the translation and that is why there is so much disappointment over Haswell. Is it faster than Ivy? Sure, but not by enough to excite the desktop enthusiasts.
EDIT: And some more points
- Intel is playing catch-up in areas other than CPU power. Part 1: The ARM threat. Intel faces threats on two fronts. The first (and by far the biggest) is from the ARM licensees who dominate mobile and are looking to jump out of mobile. Obviously Atom is Intel's primary competitor against ARM, but Haswell plays a very important role too. The lowest-power Haswell parts will have active power envelopes just a tad higher than Cortex-A15 parts (and likely idle power envelopes that are just as good as the ARM parts). Haswell handles higher-end tablets and convertible notebooks and raises an effective barrier to ARM creeping any higher. Of course, the trade-off of obsessing over power consumption is that Haswell isn't designed with extremely high clockspeeds in mind, and it ain't getting more than 4 cores.
- Intel is playing catch-up in areas other than CPU power. Part 2: The IGP problem. Intel's own marketing for Haswell has placed a lot more emphasis on the IGP. The highest-end derivatives of Haswell will have some pretty nice IGPs, but then again, on the desktop we really just don't care. AMD owns this space right now, especially on the desktop, where Trinity can run its IGP at full speed. The one piece of good news is that Intel's desktop parts pretty much all get the same IGP, and that it is not the highest-end IGP. First, that means no more limiting the good IGP to the high-end parts where it makes even less sense, and second, it means the higher-end desktop parts are not devoting huge resources to a vestigial IGP. Of course, Trinity (and especially Kaveri) will have much better desktop IGPs than anything Intel can offer. However, Intel's IGPs in mobile products will be quite strong, and since Intel gets to devote more resources to the CPU, its desktop CPU has nothing to fear from AMD (short of an upgraded AM3+ platform with server-based Steamroller chips... we'll see if that threat materializes).
The downside is this: a real dedicated desktop CPU would ditch the IGP entirely and slap in 2 extra cores. Imagine a somewhat watered-down version of the 3930K with Haswell cores, a somewhat smaller L3 cache, dual-channel memory, and fewer PCIe lanes. Probably not quite as scalable in extremely multithreaded loads, but cheaper to make and still packing more punch than a quad-core Haswell. Enthusiasts would lap up these chips BUT... who else would? Remember that Sandy-E and Ivy-E can afford to be niche products because Intel is making a killing selling the exact same chips for servers. Where would the economies of scale be for these chips?
Basically: IGPs have lots & lots of repeated functional units. Intel is devoting more & more chip real estate and other resources to the IGP, and fortunately IGP performance is improving. However, that means those resources aren't going to the CPU. AMD has already voted *heavily* in favor of the IGP (mostly because that is where it has an advantage), but Intel is gradually moving toward beefier IGPs too. Something has to give, and it means the CPU cores don't get crazy increases in available resources, and you don't get more cores either.
- Tick-Tock has messed with our expectations. Think back to the good old days when CPUs were improving by leaps and bounds. One thing we tend to forget is the gap between releases of major CPU generations since they all tend to mush together in our memories. Think about comparing a 386 to a 486. The 386 was launched in 1985 and the 486 was launched four years later in 1989. Continuing on that line, the original Pentiums launched in 1993 and the Pentium II launched in 1997. If you waited 4 years between generations, you definitely got a big performance boost in the process.
Here's the trick though: How many people on this site are eager to compare Haswell to Intel's (or AMD's) top-of-the-line chips from mid-2009? To refresh your memory, we'd be comparing Haswell to first-generation X58-platform Nehalems (the i7-950 launched on May 31, 2009, almost exactly 4 years before Haswell's launch date). Now think about how many people are calling Haswell a failure because it isn't destroying the 3770K, which launched in April of last year. The tick-tock pace has given people false memories of the past, as if you had been getting major new CPUs every single year for the last 30 years, when this has not been the case. [EDIT: Another, more recent example is the jump from the P4 to the Core 2. That was another very artificial gain, since the P4's performance and heat output were so terrible to begin with. People just assumed that should be the standard instead of the exception.]
In fairness to the critics, I will agree that we aren't seeing the same torrid pace of CPU speed development that we saw back in the 80's and 90's, but at the same time, there are some very strong improvements being made in CPU performance. Unfortunately, most of the low-hanging fruit has been picked. Multi-core CPUs are great, and new vector instructions are awesome. Unfortunately, both of those improvements need properly written software to show noticeable gains, and the software has lagged badly. AMD and Intel would desperately love for every application on your computer to scale beautifully to 16/32/64/etc. cores, but it ain't happening in a pervasive way. That's not to say that no applications use the resources well, but there are a huge number of everyday applications that do need the performance but aren't written to take advantage of the computing resources (the web browser I'm using right now is one such application).
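To make the "software has to be written for it" point concrete, here's a toy sketch (names and workload are my own, purely for illustration): the serial version below uses exactly one core no matter how many the CPU has, while the parallel version only benefits from extra cores because the programmer explicitly split the work up. Most everyday applications look like the first function.

```python
# Toy example: summing squares serially vs. across multiple cores.
# Serial code gains nothing from a 4-, 8-, or 16-core CPU; only the
# explicitly partitioned version can use the extra hardware.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of i*i over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def serial_sum(n):
    # One core does all the work, regardless of core count.
    return partial_sum((0, n))

def parallel_sum(n, workers=4):
    # The programmer must carve the range into chunks before
    # extra cores help at all -- this is the "properly written
    # software" the CPU vendors are waiting on.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # absorb any remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Both compute the same answer; only the second scales with cores.
    assert serial_sum(100_000) == parallel_sum(100_000)
```

The same asymmetry applies to vector instructions: an auto-vectorizer or hand-written SIMD path has to exist in the binary before AVX2 buys the user anything.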