CPU World details Haswell package configurations

Intel’s next-generation Haswell processor isn’t due until next year, but CPU World says it has the goods on the various chip configurations we can expect. The details don’t cover specifics like cache sizes or clock speeds, but they do mention core counts, graphics tiers, and memory configs.

According to the site’s sources, there will be nine Haswell packages, at least to start. On the desktop, LGA packages will be available with dual- and quad-core dies. Both will have dual-channel DDR3 memory controllers with official support for speeds up to 1600MHz. These parts will purportedly sport GT2-class integrated graphics, a step below the GT3 GPU variant available on the mobile side.

For notebooks, the performance segment gets a quad-core die with GT3 graphics. There will reportedly be separate mainstream quad- and dual-core dies with GT2 graphics, as well. Although all three have dual-channel memory controllers, only the quads support two DIMMs per channel. You’ll apparently have to drop the memory clock to 1333MHz to run four DIMMs.
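As a bit of back-of-the-envelope context (our arithmetic, not CPU World's): peak theoretical bandwidth for these configs is just the transfer rate times eight bytes per 64-bit channel times the number of channels. A quick sketch:

```python
# Peak theoretical DDR3 bandwidth: transfers/s x 8 bytes per 64-bit channel x channels.
# Back-of-the-envelope only; sustained throughput in practice is considerably lower.
def peak_bandwidth_gbs(transfer_rate_mt_s: int, channels: int) -> float:
    return transfer_rate_mt_s * 8 * channels / 1000  # GB/s

print(peak_bandwidth_gbs(1600, 2))  # dual-channel DDR3-1600: 25.6 GB/s
print(peak_bandwidth_gbs(1333, 2))  # four-DIMM fallback at 1333MHz: ~21.3 GB/s
print(peak_bandwidth_gbs(1600, 1))  # single-channel Ultra Thin part: 12.8 GB/s
```

In other words, the rumored four-DIMM fallback gives up roughly 17% of peak bandwidth, and the single-channel Ultra Thin part makes do with half of it.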

The ultrabook-sounding Ultra Thin and Light category is populated by a couple of duallies, the site says. One has dual-channel memory and GT3 graphics, while the other is a GT2 model with just one memory channel. Both are supposed to combine separate CPU and chipset silicon on the same physical package, which should shrink the platform’s overall footprint.

Intel is scheduled to discuss Haswell’s microarchitecture at its Developer Forum in San Francisco next month. We’ll be on the scene to bring you the latest on what’s in store.

Comments closed
    • Beelzebubba9
    • 7 years ago

    Is anyone actually bothered by this?

    If the rumors are true (and I have no reason to doubt them), Haswell will offer substantially improved performance over Ivy Bridge: a 20% IPC improvement on the CPU side, and a 200% improvement over the HD 4000 with the GT3 GPU. And it'll do this while fitting into even lower power envelopes (10W) on the low end and offering single-threaded performance close to a POWER7 on the high (130W) end. And this using the same DDR3 memory we've been enjoying hilariously low prices on for years now.

    And if you need more than four cores, Intel will even sell you an 8+ core CPU with DDR4 support as Haswell-E if you don't mind spending the extra money (and maybe losing the GPU and some single-threaded performance).

    In short, Haswell has the potential to be the biggest game changer in recent memory, and people are still bitching because the mainstream parts available at launch aren't GDDR5-powered 12-core monsters that cost $55 and come with their own ice cream maker.

    • Shadowdane
    • 7 years ago

    I'll wait for the Skylake architecture. I'm really surprised my Lynnfield i7-860 is still going strong; I really see no need to upgrade when I have this thing clocked at 4GHz.

      • flip-mode
      • 7 years ago

      Don’t be surprised. It’s a great processor. A Phenom II X4-955 at stock speed is suiting my home computing needs just fine.

    • Farting Bob
    • 7 years ago

    I'm glad I got my 2500K. I see no reason to upgrade for at least another few years. Generally I like to be able to double my performance in the things I do a lot of, such as video encoding and gaming, while lowering power consumption. I have no problem waiting a while to get that. Haswell may not offer me enough to open my wallet, although its successor may well do. By then I expect DDR4 to be available and hopefully not priced too high, so I can just do a full system upgrade while I'm at it.

      • CaptTomato
      • 7 years ago

      2500k sounds like a near perfect buy…..

      • flip-mode
      • 7 years ago

      The i5-2500 is one of the awesomest CPUs of all time: the performance delivered, the price it launched at, the power consumption, and the longevity… no other CPU has delivered so much for a launch price of $216.

        • UberGerbil
        • 7 years ago

        And I got mine for $189 during NCIX’s intro sale, a price I haven’t seen since (though I’ve seen $199). Money well spent.

    • Jakubgt
    • 7 years ago

    6-8+ cores would have been nice. But honestly, a quad-core Haswell-based CPU is a sufficient upgrade over my Q9450.

      • tfp
      • 7 years ago

      I expect a dual-core Haswell + HT will outperform a Q9450, not that it would be worth buying.

      • dashbarron
      • 7 years ago

      Same chip here, but I think I'll wait for the core bump before upgrading. The "future-proof" anal logic for longevity, and all. I used the same rationale for buying the Q9450 instead of a dual-core EXXX, and I'm glad I did.

    • mganai
    • 7 years ago

    No desktop parts with GT3? Boo.

      • willmore
      • 7 years ago

      Yay for not wasting extra die space on a thing that I’ll never use.

        • NeelyCam
        • 7 years ago

        But having the GT3 option there would be nice for those of us who don’t use discrete cards.

        • rrr
        • 7 years ago

        I’m sure Intel’s entire lineup is specifically tailored for you, random Internet guy.

          • DeadOfKnight
          • 7 years ago

          It is. If you don’t like what Intel is doing, blame willmore.

      • HisDivineOrder
      • 7 years ago

      I think I’d like to see more Optimus where the GPU shuts down when I’m doing normal things and only activates my discrete GPU when I actually am using it.

      Do you know what's quieter than fans running silently on a modern video card? All fans turned completely off for extended periods. Do you know what uses less power than a GPU idling via ZeroCore on long idle? The video card turned off, drawing next to no power, not powering up 2-4GB of memory dedicated only to the video card, and not running any data through the PCIe bus, that's what. Imagine if, all that time your video card is running with its fans going (even slowly), it were turned off instead. Now imagine the lifespan of that card versus one that's constantly ramping up and down. One that turns off is probably going to last a little longer, and, yeah, some of us do keep our cards for longer than a year before the new hotness comes along.

      I want the better-performing integrated GPU that'll keep the load off my discrete GPU until it's absolutely necessary, and, honestly, I wouldn't mind it if Intel or Nvidia or even AMD would give us a way to have that integrated GPU improve the performance of our discrete GPU and give us a little extra oomph in a Lucid Virtu MVP kind of way. Except with better tuning.

      Keep giving us crappy integrated GPUs, Intel, and people'll keep right on agreeing with willmore that the integrated GPU is a "waste of space," because it'll be such a crappy performer that no one'll see it any other way.

    • chuckula
    • 7 years ago

    OK guys, now let's get coordinated with our Intel hate here! I have just confirmed that Intel has not dropped DDR3, so we want you to use the "Intel forces us to stay with DDR3 and is an innovation-hating illegal monopoly" rant. Under *no circumstances* should you post the following "Intel is an evil monopoly that is forcing DDR4 down our throats at gunpoint to make an illegal profit" rant below:

    “OMG! Intel is just forcing its evil monopoly down our throats! I have 16 GB of *perfectly good DDR3 memory* and those evil Intel types will force me to throw it out just so they can make an illegal monopoly profit! Everyone knows that DDR 4 is WORSE than DDR 3 and Intel are just forcing it down our throats to make a buck! AMD would never push an immature and overpriced technology on us! We should all stop caring about performance and energy efficiency and just buy only AMD to preserve competition because only AMD innovates and gives you ATE COARZ and lets you keep your memory! Help us AMD, you’re our only hope!”

      • raddude9
      • 7 years ago

      In other words, Rambus RDRAM: sometimes, occasionally, companies learn.

      • djgandy
      • 7 years ago

      Pretty much sums it up.
      “My old hardware is obsolete, whaaaaahhhh”
      “My old hardware isn’t obsolete, whaaaah”

    • Bensam123
    • 7 years ago

    And for all the AMD BD haters, get ready to witness what happens when one company is so far ahead of the other that it no longer needs to compete.

    Two to four cores and 1600MHz memory… More cores were supposed to be the future; what happened to the six and eight cores that were supposed to pop up in the next generation?

      • tfp
      • 7 years ago

      I am so tired of seeing these posts. There is nothing wrong with people that don't like BD; it's not good enough. AMD needs to release a CPU that people actually want to buy [outside of the standard AMD-only club]. Intel doing these kinds of things helps AMD get back in the game; be happy with that.

        • tfp
        • 7 years ago

        Come now, down votes and no response? Where am I wrong?

        Is BD a good CPU that people should buy over Intel?
        Is it a good thing to have AMD continue to release CPUs that are underperforming?
        Is it not helpful for AMD that Intel has slowed its development cycle?
        Would people rather Intel speed up its development cycle to make life harder on AMD than it already is?

        • Bensam123
        • 7 years ago

        No, there isn't anything wrong with people having an opinion about something. What's wrong is when they start beating AMD into the ground for their underperforming product, proclaiming them dead, and worshiping the one true Intel.

        This is the other side of that same coin; this is what happens when things go that way. And be it mercy buying, rationalizing it as 'good enough', or simply offering support as a diehard fan… AMD deserves to stay in the game. As a matter of fact, even if you're an Intel fan you should still want AMD to stay in the game. Both sides should want each other to stay in the game.

        The second question in my first post was a semi-rhetorical response to my first sentence.

          • chuckula
          • 7 years ago

          The problem with all of your points is this: You demand that Intel innovate and give you everything you want and call them a monopoly… while giving AMD a free pass to *not* innovate and *not* give us what we want while saying that we should all love AMD for the sake of “competition”

          I hate to break it to you but “competition” is not the end goal. Getting something better is the end goal and competition is the means to the goal. You can whine about Intel all day but it is up to AMD to actually, you know, *compete* with Intel.

          Case in point: You have all these posts about how it is Intel's (and only Intel's) responsibility to "seed" the market with DDR4 systems and screw the consequences. At the same time, you don't seem to have any problem whatsoever with AMD doing exactly zero on the DDR4 front. Oh, and if Intel were to take your advice, I can just imagine the screeching we'd hear from you about how the big-bad Intel monopoly is raising prices and pulling "another RAMBUS" on poor consumers by pushing out DDR4 before it is ready.

            • Bensam123
            • 7 years ago

            You're misunderstanding me as well. My point was that you shouldn't want one company to end up broke, and honestly BD isn't nearly as terrible as people make it out to be (the efficiency is worse, but price/performance is on par).

            So when people beat AMD into the ground over something that isn't nearly as terrible as they make it out to be, I'm going to point out what happens when they try to persuade the rest of the world to never buy AMD products ever again. Your voice, everyone's voice, has meaning, and if you shout loud enough people start to believe you. When people do this without realizing what the end result will be, they should be told what it is when it shows up.

            I believe both companies should remain innovative. That's a completely different issue than the point I'm making here. Don't confuse me with someone who thinks Intel should hold back so AMD can 'catch up'. That was tfp saying that.

            • tfp
            • 7 years ago

            No, that wasn't what I was saying. I don't think Intel "SHOULD" slow down; I think they DO slow down because there isn't a market or financial reason to continue to "innovate" at speed X when a slower speed will do. At the end of the day this helps AMD; I don't see how this is a bad thing.

            Also, BD is not very good; it's slower than PhII in a number of cases, and my old Q9400 can keep up with PhII much of the time (well enough for me not to upgrade to a cost-effective AMD option).

            • Bensam123
            • 7 years ago

            No it's not, dude… visit the TR benchmarks again. You can have an 8150 for $180 now too. Your Q9400 performs quite a bit worse than either the Phenom II X6 or the 8150.

            [url<]https://techreport.com/articles.x/21813/19[/url<]

            • tfp
            • 7 years ago

            Worse than a hex-core CPU? No kidding, right? Also, I'm running with a 20% overclock that moves the Q9400 up just over the X4 840, not bad for anything that is not super integer-based or very highly threaded. That also includes performance benches I just don't care about, like rendering, encoding, or 7-Zip. This was also pointed out here, and you seem more than happy to ignore it:

            “Bulldozer’s performance characteristics could make a fair amount of sense for server-class workloads, but desktop users will probably always have to contend with some applications dominated by a nasty, branchy single main thread. In such cases, the FX chips aren’t horribly weak, but they’re sometimes no faster than a relatively cheap CPU like the Athlon II X3 455.”

            My Q9400 IS faster than an Athlon II X3 455 in (as far as I know) all the cases I care about, so tell me why I would upgrade? I'm not running a server.

            Also, that $180 is really much higher, as I would need to replace my MB/CPU/RAM, and I might as well do the GPU while I'm at it. As I'm not running into huge performance issues yet that can't be fixed by updating the 9600GT I have in my PC, I see no need to waste money on any platform. If I did have money to burn I would NOT buy an 8150, as I can get much more performance for only 100 or so dollars more. At a system-cost level it's just not that big of a deal after including MB/GPU/etc.

            There was an article somewhere online that showed that even the lowly quad Core 2s still perform very well in most games assuming they have a good graphics card; the 8150 is not enough of an improvement to care about.

      • chuckula
      • 7 years ago

      Well, considering Sandy Bridge with 1333MHz RAM won pretty much every memory benchmark compared to Bulldozer running 1866MHz RAM (go read TR's review if you don't believe me), it may not be that big a deal. Keep in mind that you can always overclock the memory, too.

        • forumics
        • 7 years ago

        Yeah, well, the jump from Pentium to Core wasn't a big deal either; neither was the jump from SB to IB, nor will be the jump from IB to HW.

        It's not about a single BIG performance improvement but rather the incremental small improvements that brought us from Pentium to HW, and every step counts.

        Simply thinking "I'm fine with 1333MHz RAM" is like saying 2GB of DDR2-800 is good enough for everybody.

          • homerdog
          • 7 years ago

          Pentium to Core was a big deal.

          • rogue426
          • 7 years ago

          What? Are you crazy? Core was a huge jump from the Pentium 4 fiasco. Enthusiasts actually started buying Intel again after Core Duo was released. AMD was charging $1k for its top-of-the-line FXs before Core Duo was released.

            • mganai
            • 7 years ago

            Really, huh? Any articles on these CPUs still online, out of curiosity?

            • ikeke1
            • 7 years ago

            [url<]http://www.anandtech.com/show/1920[/url<]

            [quote<]"As with all FX series processors, the FX-60 debuts at [b<]$1031[/b<] in quantities of 1000, so you can expect street pricing to be at or around that number. The FX-57 will drop to $827 mark as it will co-exist with the FX-60."[/quote<]

            So with inflation that'll be ~$1150 in today's dollars for a dual-core top-of-the-line CPU.

            • CaptTomato
            • 7 years ago

            It's about increasing levels of efficiency over time, and it's been a huge yawn on the consumer desktop since SB in January 2011….

            • forumics
            • 7 years ago

            Whatever; however big the jump was from Pentium to Core, it'll never be as big as from Pentium to IB… if you still think Pentium to Core is a big jump, then it's time to pull your head out of your ass.

            Once again: it's not about a single BIG performance improvement but rather the incremental small improvements that brought us from Pentium to HW, and every step counts.

            • rogue426
            • 7 years ago

            Of course the jump from Pentium to IB is bigger than Pentium 4 to Core Duo; there are several different architectures in between the two. And yes, I get the point you're making about incremental improvements in process development. However, to say that Core Duo wasn't a huge improvement over P4 at the time of its release is false.

          • BestJinjo
          • 7 years ago

          Core 2 Duo was 2x faster per clock cycle than the Pentium 4/D NetBurst architecture was. The E6400 at 2.13GHz went head-to-head against a Pentium 4 Prescott or Northwood at 3.4GHz in many applications and beat them badly in games. Add in overclocking to 3.2-3.4GHz for those early C2D chips, and it was the single greatest generational leap in CPU performance in the last 10 years.

        • Bensam123
        • 7 years ago

        I wasn’t talking about the performance benefits it would provide, but rather that there was no jump in memory at all, which has been a part of like every other generation of CPUs since time began.

      • I.S.T.
      • 7 years ago

      BD has lower IPC than a Phenom II. Phenom II's IPC was roughly the same as Conroe/Kentsfield.

      Unless you’re doing very specific things, BD is completely useless, and you might as well buy a Phenom II X6 if you really want to buy an AMD CPU. It even has Turbo!

      Yes, BD has AVX instructions, but by the time they become common for most folks’ usage, the CPU will be so far behind it’ll be utterly worthless. They are a bonus if you are doing earlier mentioned very specific things, though.

        • Bensam123
        • 7 years ago

        This is what I mean about beating AMD into the ground. Yet BDs are on par in price/performance. They're even recommended as alternatives in the system guides. They don't have the efficiency, but not everyone cares about that when it's weighed against losing the only other major CPU maker in the industry.

        You're too focused on nitpicking the little things to look at the overall picture in an objective way. I don't care about AVX; I wasn't even going to bring it up, because, just like you said, it doesn't matter.

          • I.S.T.
          • 7 years ago

          …Little things? In most tasks that everyday people use, it's frigging slower than AMD's previous model (the X6es, not the X4s).

          It's Pentium 4 Willamette all over again. Time will tell if AMD can fix the most obvious flaws and perhaps pull a Northwood or something similar out of it.

          I mean, it’s only competitive if you’re doing super threaded stuff that is also mostly integer based.

            • Bensam123
            • 7 years ago

            It’s not…

            [url<]https://techreport.com/articles.x/21813/19[/url<]

            See how all the hatred and preconceived notions are warping the view of the processor? The 8150 is competitive with a 2500K, and it's cheaper by about $30 versus the non-K and $40 versus the K. The only part of it that is not competitive is the power efficiency.

            • I.S.T.
            • 7 years ago

            And to you I link

            [url<]https://techreport.com/articles.x/21813/7[/url<] Look at how it compares to the Phenom 2 X6 1100T. And from the same article: [url<]https://techreport.com/articles.x/21813/8[/url<] In this case just barely any faster than the 1100T for the most part. In one test, slower. [url<]https://techreport.com/articles.x/21813/10[/url<] ever so slightly faster again, except in the last test where the minimum FPS is lower than the 1100T and the maximum is higher. I prefer going by minimum FPS if I'm buying a card based off of nothing but FPS numbers, but your milage may vary. šŸ˜› Oh, and the power needed? [url<]https://techreport.com/articles.x/21813/16[/url<] Idle is lower, yes, but the load power is higher. For something roughly the same speed. And for the record, I'll state my opinion as BD as far as the TR review show sisn't quite as bad as I remembered, but unless you're doing super threaded stuff, there's no real reason to use it over a Phenom II as long as the Phenom II's got a decent enough clock speed(3.4 GHZ or higher). It'd be a waste of 189 or so dollars. Hell, the model with that clock speed costs 109 dollars at newegg: [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16819103727[/url<] Buy a BD compatible board, and you'll save yourself roughly 70 bucks and be future proof for when AMD finally puts out a good new CPU. I just don't see why the [i<]average[/i<] person needs a BD over previous model AMD quad cores or current model Intel quad cores.

            • Bensam123
            • 7 years ago

            So… you took the overall conclusion and split it up into what the article said?

            The 8150 may be slower in some scenarios, but overall it's still on par with a 2500K and still faster than an X6. It still costs $30-40 less than the 2500K.

            Over previous P2s, because it's faster overall. When comparing to the 2500K, because it's cheaper. When looking at the overall picture, because the price/performance is on par and it's a bad idea for Intel to be the only game in town.

            • I.S.T.
            • 7 years ago

            Except you seem to be missing the fact that the super threaded apps, which IMO Tech Report is too heavy on in its CPU reviews, are what is contributing to such a high price/performance showing. If it weren't for them, it'd be a fair bit lower.

            • chuckula
            • 7 years ago

            Not to mention that if your workload truly is super-threaded and it is really important to have the best multi-threaded performance, there are always the 2600 and 2600K, which Bensam conveniently forgets to mention when talking up the virtues of Bulldozer….

            • Bensam123
            • 7 years ago

            They also cost more. I never mentioned the ‘virtues’, I was talking about price/performance.

            Thank you for reminding me of things I never said though.

            • rrr
            • 7 years ago

            Except they cost quite a bit more, especially with AMD's recent price cuts.

            • Bensam123
            • 7 years ago

            Yet they're still a part of testing the processor. Not everyone is after a processor to play games, or even the certain games that it specifically performs worse at.

      • vargis14
      • 7 years ago

      As a stopgap measure, it would be nice to see AMD die-shrink the true six-core Thuban down to 28nm, throw in some memory controller improvements, maybe a third channel, and some added instructions.
      I think an overclocked 28nm six-core Thuban running at 4.5GHz+ would beat a 2500K and at least get people interested in AMD again.

      For the longest time I have been wondering how a 28nm optimized six-core Thuban would perform. Something tells me it would grab a lot of people's attention, including Chipzilla.

      As for the Sandy-to-Ivy Bridge transition, Intel deliberately did not solder the IHS to the Ivy Bridge die like it did on Sandy Bridge dies. It did not want 5+GHz overclocks on Ivy Bridge CPUs to take away from Sandy Bridge-E LGA 2011 platform sales.

        • I.S.T.
        • 7 years ago

        It'd only beat it in super threaded tasks, vargis. Even if you mildly enhanced it like you describe (though yes, AMD's memory controller is not on Intel's level and needs big improvements; BD actually went a fair way toward that in terms of the memory controller, then screwed up the rest of the memory subsystem. Seriously, fix the caches and you have an easy 5% or more IPC improvement, depending on how deep the fixing goes), in lightly multithreaded or single-threaded tasks a Core i5-2500K would dominate a Thuban.

        Sad how AMD is behind. I was really hoping BD would be roughly Nehalem level of performance overall, and yet that did not happen…

          • vargis14
          • 7 years ago

          A hot-clocked 28nm optimized Thuban? It's all theoretical, you know :)

            • Bensam123
            • 7 years ago

            You know there aren’t a lot of people that can do hypotheticals.

      • PixelArmy
      • 7 years ago

      You are right! We should tell Intel to take time off from making CPUs… This will definitely speed up advances in CPUs! Brilliant!

        • Bensam123
        • 7 years ago

        Completely misunderstood my point.

      • flip-mode
      • 7 years ago

      Can we hate AMD for the mere fact that they’re so FAIL at competing? You know, it’s not Intel’s fault that AMD can’t seem to bring a decent chip to market.

    • chuckula
    • 7 years ago

    Hrm… more pondering on this info. Of course, all of this is subject to change:

    1. Good news for AMD: Desktop Trinity is safe from being beaten by Haswell at IGP in any meaningful way. Of course, this isn’t the greatest victory when even inexpensive discrete GPUs will clobber both solutions, but in the HTPC niche I’d expect 65 watt Trinity to be very successful. Haswell with GT2 graphics will still be a step up from Ivy Bridge, but nothing earth shattering.

    2. Good news for Intel: GT3 Haswell will outperform mobile Trinity by a solid margin and will hold an especially strong lead in Ultrabooks. TR still hasn't gotten the 17 watt Trinity parts in for review, but all the benchmarks I've seen show that the 17 watt Trinity is about on par with the HD 4000 in Ultrabooks. Haswell's greatest jump will be in the low-power parts, and I doubt that even Kaveri will fully close the gap at < 20 watt power envelopes (at higher power envelopes Kaveri should beat Haswell, though).

    [Edit: I'm going to take the fact that there are no responses to this post, just downthumbs, as an indication that both the Intel and AMD fanboys have run out of rational arguments and are just trying to shout down anything that doesn't agree with how they wish things were versus how they really are. It's really sad when even a post that shows advantages for each side gets downthumbed because it isn't "good enough" for their irrational fantasies.]

      • dpaus
      • 7 years ago

      [quote<]Edit: I'm going to take the fact that there are no responses to this post... to indicate that both the Intel and AMD fanboys have run out of rational arguments[/quote<] That's certainly preferable to contemplating the possibility that they consider your opinion irrelevant.

      • flip-mode
      • 7 years ago

      Response to attention-craving-chuckula: “Response unnecessary.”

      Edit: A lack of responses and / or a lack and / or existence and / or abundance of positive or negative score will be interpreted to mean that I have conquered the irrational Internets.

        • chuckula
        • 7 years ago

        Mission accomplished! Back under the bridge now until the next article.

          • dpaus
          • 7 years ago

          Say ‘Hi’ to NeelyCam for us….

            • chuckula
            • 7 years ago

            somebody needed to take up the slack for him since he’s on vacation.

      • DeadOfKnight
      • 7 years ago

      [quote<]Trinity is safe from being beaten by Haswell at IGP in any meaningful way[/quote<]

      I wouldn't say that quite yet.

        • chuckula
        • 7 years ago

        Don't forget that I used the word [b<]Desktop[/b<] at the beginning of the sentence. AMD's IGP philosophy is to optimize the GPU for best performance on the desktop and then scale down to mobile. Intel's philosophy seems to be for the GPU to work best at about 35 watts and then scale down for Ultrabooks and scale up for desktop. The end result is that Intel's hardware is probably operating at its peak in full-sized notebooks, with desktops not seeing huge scaling gains while Ultrabooks (hopefully) don't have huge scaling losses. If the rumors are true that desktops only get GT2, then beefier versions of Trinity (especially the 100+ watt TDP jobs) will still win on GPU. Now, on the CPU front Trinity will be slaughtered, but that is sort of a given.

    • Shambles
    • 7 years ago

    Remember back when the Core 2 CPUs came out, how great they seemed? Remember how back then we thought for sure that by 2012 the base-model CPUs would be quad cores, with the mid-range and high-end stuff being anywhere from 8-12 cores?

    Thank goodness for monopolies putting that dream to bed.

      • chuckula
      • 7 years ago

      [quote<]Thank goodness for monopolies putting that dream to bed.[/quote<]

      What are you talking about? AMD will sell you an AYEET COAR processor for less than $200.

      More seriously, my Dad likes to do folding as a hobby, and he recently got a non-OC'd, non-K-series desktop Ivy Bridge processor that triples the score of his 2008-era Core 2 Quad. I wouldn't call tripling the performance in 4 years some sort of disaster.

        • bcronce
        • 7 years ago

        It's not Intel's fault 8-core or higher isn't popular; most developers still don't understand basic multi-threading. Multi-core chips use more power, cost more to make, and spend most of their time idle.

          • I.S.T.
          • 7 years ago

          Plus, not every problem can be highly multi-threaded like that…

      • MadManOriginal
      • 7 years ago

      The problem isn’t CPU advancement, it’s people who think blindly throwing more cores into CPUs is the best solution for everything.

      • Majiir Paktu
      • 7 years ago

      Architectural improvements are much better (and much more difficult) than just adding more cores.

        • Kurotetsu
        • 7 years ago

        This.

        A launch Core 2 Duo from back in 2006 or so would get annihilated in everything that mattered by Intel's dirt-cheapest dual-core Celeron, which costs LESS money than that Core 2 Duo cost back in the day. You're getting more performance for less money. That is incredible progress.

        The people who legitimately need 8-12 core CPUs (read: the people who have software THAT ACTUALLY USES THAT MANY CORES) have that + the extra cores. Even more progress.

        Oh, but wait, we don't have 20-core CPUs for $99, sitting in consumer boxes that'll MAYBE use 1/5 of those cores for anything. So it sucks.

          • tfp
          • 7 years ago

          Sure we do; it's called a low-end GPU. They can be found in most desktops, and if all you do is Facebook, you're not using the HW.

      • StashTheVampede
      • 7 years ago

      How many "consumer" apps are using four / eight / twelve cores? Seriously, give me an application that 98% of the computing population will use on a daily basis that will utilize all of those cores.

      Unless you're dealing with things like databases, CAD, 3D modeling, programming, and a "few" other items, all of those cores are pretty worthless for your day-to-day needs. Intel understands this and is making sure it balances the desire for cores, power consumption, and IPC.

      For the users that need/want the cores, go with the Xeon platform — it has what you need in several socketed configurations.

        • heinsj24
        • 7 years ago

        So long as I am using a multi-tasking operating system, I’ll take as many cores as I can get.

        You don’t need to run that one monster application to take advantage of multiple cores – you could just… you know… run many applications to find more cores useful. If multiple cores were not useful, we’d all be using Celerons.

          • Kurotetsu
          • 7 years ago

          From my personal experience, multi-tasking responsiveness is affected more by your main memory and persistent storage. Modern operating systems are REALLY, REALLY good at managing CPU time for all the processes running on your system (seriously, they've had decades to get it right). The main point where responsiveness takes a hit is when those processes are waiting for data to come back from somewhere; that's where your main memory and persistent storage come in. Throwing more cores into the mix doesn't fix that problem right now (modern CPUs haven't been a bottleneck to responsiveness for a LONG time, if they ever were).

      • djgandy
      • 7 years ago

      Well, since you are so smart, you can get to work parallelizing all the applications out there you think should use multiple cores. First off, I think you should make the cursor blink in MS Word multi-threaded.

      Fact is, the average user has little use for more than a dual core, and faster single-threaded performance is more important. Many tasks are still very serialised due to dependencies in order of execution, so huge multi-threading gains are just not there to be had.

      People who really need multi-core machines will often have multiple machines. You know, running web servers, build systems, compiling, etc. All nice data-crunching scenarios that lend themselves to running lots of processes, but on a per-process basis still very serialised.

      Take something like a web browser, how far do you think you can multi-thread it easily? You’ll probably find your multi-threaded solution will end up more computationally complex than just executing in serial.

      Of course if you open ten tabs in a modern browser, ten cores will render those tabs quicker, as they are ten distinct tasks, but are you then going to tell me you can read all of those tabs quicker than a dual core CPU can render them?
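      The serialisation argument above is essentially Amdahl's law: if only a fraction p of a task can run in parallel, n cores can never speed it up by more than 1 / ((1 - p) + p/n). A minimal sketch (the parallel fractions below are illustrative assumptions, not measurements of any real application):

      ```python
      # Amdahl's law: speedup when a fraction p of the work parallelizes perfectly
      # across n cores and the rest stays serial.
      def amdahl_speedup(p: float, n: int) -> float:
          return 1.0 / ((1.0 - p) + p / n)

      for p in (0.50, 0.75, 0.95):          # assumed parallel fractions
          for n in (2, 4, 8, 12):           # core counts
              print(f"p={p:.0%}, {n:2d} cores -> {amdahl_speedup(p, n):.2f}x")
      ```

      At a 50% parallel fraction, even eight cores deliver less than a 2x speedup, which is the kind of ceiling being described above.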

    • lycium
    • 7 years ago

    Dual channel memory at 1600MHz? 2008 called and wants its desktop PCs back…

    The only remotely exciting thing in there seems to be the upgraded GPU / vector processing units.

      • JrdBeau
      • 7 years ago

      Agreed. Disappointing to see those max ram numbers too.

      • CaptTomato
      • 7 years ago

      State of CPU development is a sick joke lately.

        • csroc
        • 7 years ago

        Looks like I don’t really need to care about waiting for Haswell at this point. I wanted to build a new system in the next year, maybe I’ll do it sooner rather than later.

      • chuckula
      • 7 years ago

      Why is it that the same people who whinge about dual-channel memory configurations are:
      1. The same ones who whinge that quad-channel motherboards cost more; and
      2. The same ones who whinge that overclocking memory is a waste of time?

      You want DDR4? Sure, you *say* you want it… but do you want to PAY for DDR4 in 2013 on a consumer-grade product? Yeah, didn't think so.

        • Shambles
        • 7 years ago

        What’s a whinge? Does it help doors rotate?

          • FranzVonPapen
          • 7 years ago

          Ditto, how did “whinge” ever become a common mistake for “whine”?

          Have these people never tried saying the word? Just how does one “whinge”?

            • lycium
            • 7 years ago

            Common mistake? The world doesn’t stop at your borders mate…

            • djgandy
            • 7 years ago

            Whinging is a term used in the UK. It is often used to describe what children do when they want something and often includes a noise that is lower pitched than a whine after they have begged for a new power ranger toy and realise they aren’t going to get it.

          • 5150
          • 7 years ago

          No, it’s what Karl Pilkington does all day.

        • lycium
        • 7 years ago

        Whinge? I do 3D rendering; the memory subsystem is crucial. I'm neither of those things you listed; what else do you presume to know about me?

          • chuckula
          • 7 years ago

          If you are doing 3D rendering for real then you are already using a 1 or 2 socket Xeon with 4 or 8 channels of RAM as it is and this announcement is irrelevant. It’s like an 18 wheeler driver whining that a new Prius will never haul a 20 ton load… duh.

          If you aren’t doing 3D rendering for real then you should be *very* happy that Haswell will be using affordable memory technology because the new AVX enhancements in Haswell should be very favourable to 3D modeling types workloads and you’ll see solid gains over Ivy Bridge.

          Either way your original rant doesn’t make sense.

            • lycium
            • 7 years ago

            How does it not make sense, and why can't I expect desktop CPUs to have a wider path to system memory? This matters especially with that on-chip GPU / vector coprocessor. This is where the real action is for 3D rendering workloads, and it is often memory-bandwidth limited.

            I’m using a triple channel i7 920 from 2008 as my primary dev box, and it’s kept up well – a little too well, hence my initial post.

            Does it make sense?

            • chuckula
            • 7 years ago

            No, it really doesn't, because if you are serious about 3D modeling you are not in the target market for a desktop/mobile-grade Haswell. I'm sorry that it upsets you that Intel isn't artificially driving up the cost of consumer-grade hardware with DDR4 or by adding extra complexity & power draw by jumping to more channels of RAM.

            There already is an upgrade path for you: it’s called LGA-2011 be it desktop (SB-E) or workstation (2-socket Xeon). It’s got so much bandwidth that it’s a little bit insane, and the 6/8 core solutions will annihilate a standard Nehalem chip in 3D rendering. Socket 2011 exists even though for some reason you refuse to acknowledge its existence.

            As for your 920 rig, please show me reputable 3D rendering benchmarks where it comes out ahead of Sandy Bridge or Ivy Bridge… your story sounds fishy. If you had said that you had a 6-core Westmere like the 980X, then I'd be inclined to see how only a quad-core Sandy Bridge wouldn't beat your old setup, but you seem to be bent out of shape that Intel isn't catering to your every whim in a form factor that isn't intended to be used with EATX motherboards, full towers, and 1200-watt power supplies. Get over it, and I'll gladly buy you a beer if you can show any valid 3D modeling task where Haswell doesn't clean up over your i7-920.

            • lycium
            • 7 years ago

            I guess we’ll have to see what the workstation/server Haswell chips have in store.

            The i7 920 is doing quite alright, at 3.2GHz, compared to the newest CPUs. Of course the mighty LGA-2011 platform will spank it in memory bandwidth, and it’s true that this is where you really want to be for a 3D rendering box šŸ™‚

            • chuckula
            • 7 years ago

            That’s a fair argument. Unfortunately the workstation version of Haswell is not going to be out for some time and that is a *very* valid point to throw at Intel since they are really dragging their feet in the workstation market.

            • lycium
            • 7 years ago

            I’d say this stagnation in memory speed is about half of why Intel currently have the best arch for rendering: they have the best memory subsystem / cache hierarchy (the other half is that they have great IPC and FP throughput).

            AMD’s CPUs seem to be focusing on server / integer performance, while their GPU cores are quite power efficient and powerful in the GCN arch.

            • Bauxite
            • 7 years ago

            Look around for cheap used 6 core 9x0s to tide you over even longer, they are out there.

            During the hype of the SB-E launch I swapped a 920 for a 970 for $100, nice upgrade for me, cash for someone that was making an entire new main system but still left them with a functional secondary.

            • esterhasz
            • 7 years ago

            Just to wrap it up, is there a scenario where someone does 3D rendering AND faces monetary constraints, or is this excluded by the "for real" clause? If yes, would that person be entitled to complain on an Internet message board?

        • Bensam123
        • 7 years ago

        You have to seed the market before DDR4 becomes cheap enough for those 'whingers' to purchase it. You cannot seed the market without actually presenting the product to the market. This is like people saying 10gig is viable in its current form for consumers.

      • Krogoth
      • 7 years ago

      #1 – There's no demand for it in the desktop market. Just name a mainstream application that has a genuine need for triple- or quad-channel DDR3.

      #2 – Cost: it costs more to add the additional traces needed for triple- and quad-channel DDR3. It also requires more pins on the CPU package. That's why platforms that have it cost more.

        • Bensam123
        • 7 years ago

        Dude, if current 'need' starts driving CPU development we're in for a whole lot of hurt. You shouldn't be thinking 'No one needs this, so there is no reason to develop it'; you should be thinking 'How far can we really push this?'

        One drives innovation and one drives a monetary reward system. Let's see if you can figure out which is which.

          • Malphas
          • 7 years ago

          [quote<]You shouldn't be thinking 'No one needs this, so there is no reason to develop it', you should be thinking 'How far can we really push this'.[/quote<] There isn't a single industry that works in that manner.

        • faramir
        • 7 years ago

        AMD's APUs are clearly limited in their graphics performance by the available memory bandwidth – performance scales up nicely with bandwidth increases (due to higher RAM operating frequencies). While games and GPGPU applications might not seem mainstream enough to you, they nevertheless do exist and are going to be even more important in the future. As Intel pushes forward with its GPU technology, it is bound to hit the bandwidth bottleneck at some point as well.

        If they don't want additional chip and board real estate wasted on extra pins, they should be working on utilizing the existing setup better – by increasing memory controller frequency and/or by adopting even more efficient transfer schemes.

    • kamikaziechameleon
    • 7 years ago

    Not very exciting information, IMHO.

      • juampa_valve_rde
      • 7 years ago

      Where is Krogoth?

        • Arclight
        • 7 years ago

        Nobody knows, but I feel his disappointment.

        • Mourmain
        • 7 years ago

        He’s being treated for disappointment overdose.
