Report pins Ivy Bridge launch on April 8

For a while now, all signs have pointed to a spring launch for Intel’s 22-nm Ivy Bridge processors—but the spring is a pretty broad time frame. Thankfully, the rumor mill is able to offer a more precise (though unofficial) estimate.

Quoting unnamed sources at Taiwan’s PC makers, DigiTimes reports that Intel has set an April 8 release date for its next-generation processors. That date will purportedly see the arrival of no fewer than 25 new chips; 17 of them will be aimed at desktop systems, while the remainder will target notebooks and ultrabooks.

DigiTimes lists a handful of model numbers for desktop Ivy Bridge CPUs to be priced between $184 and $332. However, past reports from other sources have provided more thorough descriptions of the desktop Ivy Bridge family. You’ll want to check out this story at CPU World for more details.

Comments closed
    • Wirko
    • 11 years ago

    I just wonder why Intel didn’t keep those marketing-friendly round model numbers. I’d feel so much better with a chip with i7-3700 written on it rather than i7-3770.

    • cygnus1
    • 11 years ago

    And I think you missed my gist: anyone complaining about ‘first world problems’, such as a new CPU not coming out fast enough for them, reminds me of a whiny, spoiled little child.

    Edit: also, I take offense at him implying that whiny, spoiled little children only exist in the first world.

    • lycium
    • 11 years ago

    I think you might misunderstand the gist of it; it’s not “All people in the first world have this problem,” it’s “Only people in the first world have this problem.”

    • cygnus1
    • 11 years ago

    See, I don’t like lumping together everyone in the first world that way. Plenty of people in the first world don’t whine about such things. So when I hear people whining about ‘problems’ like that, I pat them on the back and tell them their princess problems will be OK and ask them if they need assistance getting the sand out of their vag.

    • cygnus1
    • 11 years ago

    If they can get that IPC up and they get a nice scheduler patch from MS, they might get my money. I really don’t like how difficult Intel makes it to get an inexpensive CPU with all of the virtualization features enabled, whereas AMD leaves the features enabled on pretty much every CPU it sells.
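
    For reference, the basic hardware-virtualization support is easy to check from software. Here’s a minimal sketch, assuming a Linux box; it only looks for the VT-x/AMD-V CPU flags in /proc/cpuinfo and doesn’t cover chipset-level features like VT-d:

    [code<]
    # Minimal sketch: check whether the CPU advertises hardware virtualization
    # on Linux. "vmx" = Intel VT-x, "svm" = AMD-V. Doesn't cover VT-d/IOMMU,
    # which is a separate, chipset/BIOS-dependent feature.
    def virt_flags(path="/proc/cpuinfo"):
        flags = set()
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        return {"Intel VT-x (vmx)": "vmx" in flags, "AMD-V (svm)": "svm" in flags}

    if __name__ == "__main__":
        for name, present in virt_flags().items():
            print(name + ":", "yes" if present else "no")
    [/code<]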

    • ish718
    • 11 years ago

    Yes, but the FP scheduler is shared and the L1/L2 cache is slow.

    • BobbinThreadbare
    • 11 years ago

    [quote<]If you assign two intensive threads to a module and those two threads want to use the FP unit, you will have a bottleneck right there alone.[/quote<]

    Only if they are both using the nearly brand-new instructions for double precision. It can handle two normal-precision instructions at the same time. Of course, most games don’t do much FP math outside of the video card anyway.

    • bcronce
    • 11 years ago

    Metro and Crysis bog down the GPU with horrible pixel-shading code that doesn’t take advantage of DX11. BF3 is the only game you mentioned that one could consider “GPU” limited, but like you said, it requires cranking up the quality. But with that definition of “bottlenecked”, one could always claim the GPU isn’t fast enough. You could have the fastest GPU that could be built in this universe and make it slow by cranking up the graphics too much.

    If you take the top 25 played games, set them to high (not ultra), 1080p, and 2x AA, most will be limited by the CPU (threads).

    • ish718
    • 11 years ago

    Bulldozer is different, though. If you assign two intensive threads to a module and those two threads want to use the FP unit, you will have a bottleneck right there alone. Bulldozer should be treated as a quad core with physical hyperthreading.

    Now assign those same two threads to two cores in a Phenom II X6 and you will get better performance because of the dedicated resources.

    You have physics, AI, rendering, data decompression, etc. in separate threads.
    It depends on how intensive these threads are and whether it’s worth it to put them on separate cores and suffer the synchronization overhead.
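
    A minimal sketch of how one could actually test that on Linux (assuming os.sched_setaffinity is available, and assuming logical CPUs 0 and 1 share a module while 0 and 2 do not; check /sys/devices/system/cpu/ for your real topology). It only sketches the methodology; a Python loop is not a serious FP workload:

    [code<]
    # Pin two busy workers either to the same Bulldozer module or to separate
    # modules and compare wall time. The CPU numbering is an assumption; verify
    # it against your own topology before reading anything into the results.
    import os
    import time
    from multiprocessing import Process

    def worker(cpu, iterations=5_000_000):
        os.sched_setaffinity(0, {cpu})   # pin this process to one logical CPU
        x = 0.0
        for i in range(iterations):      # floating-point busy loop
            x += (i * 0.5) ** 0.5

    def run_pair(cpus):
        start = time.perf_counter()
        procs = [Process(target=worker, args=(c,)) for c in cpus]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("same module:      %.2f s" % run_pair((0, 1)))
        print("separate modules: %.2f s" % run_pair((0, 2)))
    [/code<]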

    • Theolendras
    • 11 years ago

    If I recall correctly, they’re hoping for 15% higher IPC. If they can squeeze out a little better frequency as well, which might be possible since GlobalFoundries’ 32-nm SOI process still seems to be a work in progress, it might add up to a decent alternative.

    • Theolendras
    • 11 years ago

    Hopefully yields will get better, because I suspect they are pricing it higher than they should simply because they can’t produce enough of them.

    • Theolendras
    • 11 years ago

    I think you miss the point. bcronce seems to be pointing to either compromises made to get to market or process issues that weren’t fully mastered by AMD’s engineers. It fully works, but it may have shipped before being mature enough in some areas.

    Anyway, it’s tough for AMD to compete: they’re under enormous pressure to deliver on a timely basis, with a widening gap on the process and architecture fronts and a tenth of their competitor’s R&D budget. It’s not hard to see that they are sometimes constrained to get products out. As for marketing, I don’t care; I can read a graph without marketing guidance.

    If you remember, the first Phenom wasn’t great either, with the TLB bug and all. Phenom II delivered somewhat, but its retirement is long overdue. Some would say it was broken, but I mostly agree with you: the ecosystem as a whole and the price are what matter most.

    • OneArmedScissor
    • 11 years ago

    This is going off on a tangent. I never said a dual-core is better. I even said, in the very post you are replying to, that there isn’t anything wrong with a quad-core.

    But add more cores, [b<]and use them[/b<], and you fall further and further from the max clock speed, because very simple tasks that one core could handle end up spread out amongst all of them - but not split into pieces to make a balanced load, which was the subject.

    This isn’t an argument. It’s reality. Look at Bulldozer. It actually does function better with everything assigned to fewer cores.

    • UberGerbil
    • 11 years ago

    [quote<]It can be a bit of a ruse. You look at the task manager and see one core maxed out, one almost maxed out, one with a mid size load, and one with a small load. Great, all four cores are being used! But all that's really accomplishing is limiting your clock speed, and that just gets worse as more trivial things are moved to an even greater number of cores.[/quote<]

    Except that the clockspeed difference between having three cores asleep and all four active is only a couple of multiplier bumps. If you artificially affinitized all the threads to the same core, you'd generally see a [i<]decrease[/i<] in performance unless you heroically overclocked the chip. And there are other benefits to having those cores available, even if they don't show up in task manager: having an "under-utilized" core ready to process interrupts and other asynchronous tasks can make a difference in both the perceived responsiveness of the game and a real reduction in lag for net code -- and that's "free" parallelism you get from the OS, over and above anything that might be implemented in the game or its libraries.

    I agree that there are diminishing returns from increasing cores, but that's not news. Nor is it news that very few tasks are embarrassingly parallel, or that most games (or any other interactive applications) tend to have one or (maybe) two "heavy" threads that end up serializing the overall "speed" of the application, so the diminishing-returns situation is unlikely to change (Amdahl's Law is a harsh mistress). But given that many recent games are implementing at least a couple of threads now, and there are other tasks going on in the system (some on the game's behalf), I'm not sure you can convincingly argue that a highly clocked dual-core is still the sweet spot -- especially given that Intel's designs distribute the L3 cache amongst the cores. At least, I haven't seen anybody out there crazily overclocking 2C i3s in an effort to get better gaming performance.

    • BestJinjo
    • 11 years ago

    That’s not true. Most games are GPU limited, not CPU limited. Your 6950 is for sure the bottleneck with an i7, not the CPU. Just check out any HD7970 review: the HD7970 is at least 40% faster on average. If most games were CPU limited, the performance increase would be minimal, since we’d have to wait for much faster CPUs before the hardware could stretch its legs. That’s definitely not the case, as the HD7970 brings a large performance gain with today’s CPUs.

    Also, you can always bottleneck the GPU by increasing image quality: adaptive AA, depth of field (Metro 2033), tessellation (Crysis 2), or cranking up supersampling or MSAA + FXAA (BF3). If your i7 is bottlenecking your 6950, that’s probably because you are playing old games, not modern games, or perhaps gaming at a very low resolution such as 1280×1024.

    • BestJinjo
    • 11 years ago

    Obviously, for gaming, upgrading the GPU is almost always the better move, especially since you are already using a 4.3GHz i7. But for someone who is buying a new system, it’ll be a nice 20% boost, if not more. IVB will be ~6% faster in IPC, and because it will be on 22 nm, it’ll likely overclock to 5.3-5.4GHz (or more).

    5.3-5.4GHz * 1.06 IPC vs. most current SB i5/i7s that do 4.7-4.9GHz = 17-22% faster, depending on how good an overclocker IVB turns out to be.
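
    For reference, the back-of-the-envelope math behind that range (a minimal sketch; the clock speeds and the ~6% IPC figure are the guesses above, not measured numbers):

    [code<]
    # Speculative speedup = (IVB clock * IPC gain) / SB clock, using the
    # hypothetical overclocks quoted above.
    ipc_gain = 1.06
    for ivb in (5.3, 5.4):
        for sb in (4.7, 4.9):
            print(f"{ivb} GHz IVB vs {sb} GHz SB: {ivb * ipc_gain / sb - 1:+.0%}")
    # Prints gains from roughly the mid-teens to the low twenties of percent.
    [/code<]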

    The reduced power consumption, native PCIe 3.0 lanes on the CPU, and native USB 3.0 from the chipset are all nice bonuses. I doubt a lot of people will be upgrading from a 2500K/2600K, though.

    • UberGerbil
    • 11 years ago

    Yeah, that’s my interpretation too. The TDPs aren’t the actual power requirements of the chips; they’re the thermal-load design points that the OEMs use when designing systems — and those design points haven’t changed in some time. These are the “buckets” that the chips drop into, and as long as they stay within those buckets, the OEMs don’t have to go back and redesign things to accommodate the new generation.

    Given that the design problems get exponentially more difficult (i.e., expensive) as the load goes up (and are accompanied by added considerations like noise), there’s a lot more benefit in dropping the top bucket a few watts (or creating a new bucket below the top one) than in reducing any of the lower-power ones. If the mid-range models actually run a few watts cooler overall, it’s a nice bonus, but it’s not worth introducing a new design point for.

    • OneArmedScissor
    • 11 years ago

    Yes, but I already pointed that out. Some games are just more CPU intensive than others, largely dependent on whether or not the AI is demanding. For example, Supreme Commander can kill a few cores, but that’s because both the rendering and AI are demanding.

    However, that does not mean they are actually taking advantage of parallel processing. In fact, the reason the game would get so bogged down is exactly that they weren’t.

    It can be a bit of a ruse. You look at the task manager and see one core maxed out, one almost maxed out, one with a mid size load, and one with a small load. Great, all four cores are being used!

    But all that’s really accomplishing is limiting your clock speed, and that just gets worse as more trivial things are moved to an even greater number of cores.

    I’m not saying there are no benefits to a quad-core. My point is that standardizing anything beyond that is problematic for PCs, and that it’s directly attributable to “multi-threading” in lieu of actual parallelism.
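
    If you want to see the pattern for yourself rather than eyeballing the task manager, something like this works (a minimal sketch; it assumes the third-party psutil package, which is not part of the standard library):

    [code<]
    # Sample per-core utilization for a few seconds while a game is running.
    # The "one core maxed out, the rest lightly loaded" pattern is exactly
    # what shows up when work is distributed but not actually parallelized.
    import psutil

    def sample_per_core(samples=10, interval=1.0):
        for _ in range(samples):
            loads = psutil.cpu_percent(interval=interval, percpu=True)
            print("  ".join("core%d: %5.1f%%" % (i, pct) for i, pct in enumerate(loads)))

    if __name__ == "__main__":
        sample_per_core()
    [/code<]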

    • ronch
    • 11 years ago

    AMD has about 4 months to prove to the world that Bulldozer is not a hopeless project and GF can fix their fluke-based 32nm process. Or, they have 4 months to think about drastically cutting FX prices.

    • OneArmedScissor
    • 11 years ago

    The biggest “bug” of all is people jumping to conclusions. We don’t really know what the core itself is capable of in a PC context. They can use it any number of ways, but thus far, they’ve only shown it to us in what is effectively low-power, multi-socket server form.

    Was AMD either misleading or possibly just outright stupid to market that as a high end desktop part? Yes. Does that mean the core itself is broken? No.

    You don’t buy a Piledriver core, you buy a Trinity chip. It’s like Arrandale compared to Gulftown. There shouldn’t be any expectations of worse or better, only waiting to see what in the world this thing actually is.

    • bcronce
    • 11 years ago

    I’m not sure if these are just rumors, but I heard BD has a lot of performance “bugs”. So the “minor improvements” may assume BD was working correctly in the first place; if PD fixes the performance bugs along with delivering the “minor improvements”, there may be a large difference.

    I’m hopeful. I love competition.

    • Farting Bob
    • 11 years ago

    The TDP of a CPU means nothing more than “your cooler has to handle at most this much”. In reality, they are all very conservative estimates. Go check out a good review that does an accurate power measurement of the whole system. You’ll see that one 95 W CPU will use 20-30 W more than another 95 W CPU under stress. It’s useful for OEMs but not really for enthusiasts who do their research.

    • mesyn191
    • 11 years ago

    Even AMD’s slides suggest Piledriver is a minor improvement over Bulldozer. They need huge improvements just to match Sandy Bridge, much less Ivy Bridge.

    If they drop the price on Bulldozer enough, it could offer some value vs. Intel, but that is about the best you can realistically expect.

    • ish718
    • 11 years ago

    What you are stating is application-dependent.
    Sure, most games still need only two cores, but there are certainly games out there that do take advantage of four cores.

    • Peldor
    • 11 years ago

    I take it to mean that Intel sees a benefit in redefining the top-end desktop TDP as 77 W. Primarily it creates a differentiation point with AMD’s offerings, but it might also fit into a few more designs that 95 W wouldn’t.

    Redefining the 65 W bucket wouldn’t gain you as much, IMO.

    Keep in mind the TDP rating on top-end Sandy was 95 W, but it didn’t actually use more than about 85 W. We’ll have to wait and see what the real consumption of Ivy Bridge is like relative to the 77 W TDP. It’s possible that Intel has narrowed the gap a bit between real-world draw and rating.

    • FuturePastNow
    • 11 years ago

    There should be Sandy Bridge-E Xeons with all eight cores early next year, which *I think* will work in consumer X79 boards. But hopefully IB-E will lead to consumer octo-cores.

    • OneArmedScissor
    • 11 years ago

    The 65 W quad-cores are clocked higher than their Sandy Bridge counterparts. We also don’t know anything at all about how the GPUs are configured for i5s vs. i7s.

    • OneArmedScissor
    • 11 years ago

    They’re not going to actually split the load on the particularly demanding threads between cores, though, which is a key difference between something pseudo-multi-threaded and an actual parallel task.

    The game might be capable of distributing a few things to up to 8 cores, but that just means a few cores are going to get something like the audio and not much else. And then one core is hammered with the rendering or AI, whatever is most demanding in the particular game.

    This is where complex systems of boost modes with gajillion-core CPUs get you in trouble, like what happened to Bulldozer. It’s generally still better for maybe two cores to run as fast as possible.

    • Frith
    • 11 years ago

    I’m still confused as to why the TDP on the mid-range CPUs has remained at 65 W. Since it’s dropped nearly 15% on the higher-end models, you’d have expected the mid-range CPUs to come down to around 56 W, but it’s the same as Sandy Bridge.

    Anyone know why this might be?

    • ish718
    • 11 years ago

    Huh? I am certain most modern games now utilize 2 or more threads.

    • bcronce
    • 11 years ago

    My 6950 is CPU-bound for most games because most games are still single-threaded. I usually see one of the 8 threads on my i7 running near 100% (~11% of the total CPU), and my GPU at around 20-30% load.

    Technically, I’m neither CPU- nor GPU-bound, but thread-bound. The only way I can gain performance is to increase single-thread performance.
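
    A minimal sketch of how to confirm that from software (it assumes the third-party psutil package; “game.exe” is a hypothetical process name, so substitute your actual game):

    [code<]
    # List the busiest threads of a running game process. If one thread has
    # accumulated nearly all of the CPU time, you're thread-bound, not CPU-bound.
    import psutil

    def hottest_threads(name="game.exe", top=4):
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == name:
                threads = sorted(proc.threads(),
                                 key=lambda t: t.user_time + t.system_time,
                                 reverse=True)
                for t in threads[:top]:
                    print("thread %d: %.1f s of CPU time"
                          % (t.id, t.user_time + t.system_time))

    if __name__ == "__main__":
        hottest_threads()
    [/code<]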

    • kamikaziechameleon
    • 11 years ago

    So when will we see Ivy Bridge-E so we finally get our consumer 8-core processors???

    • bcronce
    • 11 years ago

    I also wouldn’t mind starting a server for my home storage. AMD at least fully supports all of the virtualization features.

    • Theolendras
    • 11 years ago

    I’m interested and relatively hopeful that Piledriver will be closer to Intel by that time. But I really wouldn’t hope for parity in single-thread performance. I think the best case would be better multithreaded performance, equivalent lightly-threaded performance, and a notch or two below in single-thread performance. Probably a better choice as an upgrade path than as the start of a new system, for many people.

    • bcronce
    • 11 years ago

    I’m looking forward to a more power-conservative CPU with stronger single-thread performance than my aging i7-920. IB or SB will be good enough, but I’m hoping to get an IB.

    Heck, if AMD fixes a lot of their problems with their next revision (Piledriver), which is also rumored for spring, I wouldn’t mind helping them out, assuming it’s competitive.

    • yogibbear
    • 11 years ago

    Me too!

    • Firestarter
    • 11 years ago

    Crap, I want to build something around the HD7950, waiting for April 8th would be the height of First World Problems 🙁 🙁 🙁

    • lycium
    • 11 years ago

    One Core i7-3770 for me, please!
