Leaked slide outs Ivy Bridge-E models

Haswell isn’t the only new desktop processor Intel is set to release this year. Ivy Bridge-E is also on the way, and an official-looking slide published by VR-Zone’s Chinese site suggests the new ultra-high-end platform will debut in the third quarter. According to the slide, there will be three models: the Core i7-4820K, 4830K, and 4860X. Here’s how the specifications stack up.

Model          Cores  Threads  Base clock  Turbo clock  L3 cache  TDP
Core i7-4820K  4      8        3.7GHz      3.9GHz       10MB      130W
Core i7-4830K  6      12       3.4GHz      3.9GHz       12MB      130W
Core i7-4860X  6      12       3.6GHz      4.0GHz       15MB      130W

The core counts and cache sizes are a perfect match for Intel’s existing Sandy Bridge-E processors. The turbo frequencies are almost identical, as well, but the base clock speeds have gone up by 100-200MHz. There’s also been a 266MHz jump in memory speed. Sandy Bridge-E officially supports memory up to 1600MHz, while Ivy-E appears to be primed for 1866MHz RAM.
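
For a rough sense of what that memory-speed bump is worth, peak theoretical bandwidth is just channels × transfer rate × 8 bytes. Here's a minimal sketch of the math, assuming Ivy-E keeps Sandy Bridge-E's quad-channel memory controller (which the slide doesn't confirm):

[code<]
# Peak theoretical DRAM bandwidth: channels * data rate (MT/s) * 8 bytes.
def peak_bandwidth_gbs(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000  # GB/s

# Assumes the quad-channel controller carries over from Sandy Bridge-E.
for speed in (1600, 1866):
    print(f"DDR3-{speed}, quad-channel: {peak_bandwidth_gbs(4, speed):.1f} GB/s")
# DDR3-1600: 51.2 GB/s; DDR3-1866: 59.7 GB/s (a ~17% gain on paper)
[/code<]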

All three processors have 130W thermal envelopes, matching most of the CPUs in the Sandy Bridge-E family. Intel did add a 150W Core i7-3970X late last year, though, and it looks like the door is open for a similar product in the Ivy Bridge-E lineup. I'd rather see an eight-core model. Ivy Bridge-E silicon is rumored to have as many as 12 cores onboard, so products with half that many or fewer could be rather hobbled.

Although the VR-Zone roadmap has Ivy Bridge-E CPUs appearing in the third quarter, the article text points to a November launch. That fits with the latest rumors and with the schedule for Sandy Bridge-E, which first appeared in November 2011.

Comments closed
    • LoneWolf15
    • 6 years ago

    The entire “E” platform has seemed so irrelevant to me. Different socket, much more costly, with performance benefits that help people in only certain sets of circumstances. I’ve felt this way for both the LGA-1366 and the LGA-2011 format desktop CPUs.

    As the enthusiast I am, Socket 1155 yields enough. If I needed more, a Xeon-based workstation would make more sense (yes, I know there’s socket overlap there, which is why I clarified “desktop” CPUs).

    I think it's going to be a while before I can justify replacing my i7-2600K. It will be easier to consider replacing my mobile Sandy Bridge laptop with Haswell than upgrading my desktop, though the laptop does pretty much everything I need it to as well.

      • Airmantharp
      • 6 years ago

      Yup. The real hard part is that the ‘maximum performance envelope’ for gaming isn’t going up much with successive generations.

      Once overclocked, Ivy isn’t any faster than Sandy, and Haswell doesn’t look to be much faster than that- maybe we get 5.0GHz reliably instead of ~4.5GHz with commonplace cooling? And will that make any difference with games in the next two years?

      We know the mobile side is where Intel is focusing. Getting quad-core CPUs into Ultrabooks and dual-cores into tablets with reasonable real-world battery life is likely the best we'll get out of Haswell, not that there's anything to complain about!

    • Kougar
    • 6 years ago

    [quote<]and it looks like the door is open for a similar product in the Ivy Bridge-E lineup. I'd rather see an eight-core model, though.[/quote<]

    If they already have an X "Extreme" model, then that will be the flagship for the next six months at minimum, most likely twelve. Intel waited a full year before releasing the 990X to replace the 980X. They also waited a full year before replacing the i7-3960X with the i7-3970X. Given Intel only keeps one $1K "eXtreme" part at a time, this seems to be solid evidence that IB-E users won't be getting any extra cores any time soon.

    • michael_d
    • 6 years ago

    Xbitlabs had this news last Friday.

    Anyone think it's worth upgrading from a Core i7-960? Or is it perhaps better to wait for the Haswell-E models?

      • HisDivineOrder
      • 6 years ago

      What? Two years from when IB-E shows up? By then, Intel will have given up on CPUs you can install yourself. They'll be selling them hardwired directly into the motherboard. Hell, by then, they'll probably be telling everyone to go buy a NUC instead of a CPU.

    • Krogoth
    • 6 years ago

    I'm kinda surprised that Intel hasn't attempted to release an Extreme Edition octal-core IB-E (16 threads) as the cream-of-the-crop part for the $1K+ tier.

      • Airmantharp
      • 6 years ago

      They may still; though it's hard to expect someone who wants to spend maybe ~$2,000 on a CPU to take one that doesn't come with ECC support (i.e., isn't a Xeon).

        • Krogoth
        • 6 years ago

        Certainly not the epenis crowd or people who don’t know any better. 😉

        • Anonymous Coward
        • 6 years ago

        There are customers with more money than they know what to do with.

    • NeelyCam
    • 6 years ago

    How about the news on Haswell: integrated voltage regulator is supposedly broken. True or April Fools?

      • OneArmedScissor
      • 6 years ago

      Snake oil on the VRM!

        • HisDivineOrder
        • 6 years ago

        Snakes on a (voltage) plane!

          • NeelyCam
          • 6 years ago

          You win!

      • chuckula
      • 6 years ago

      TOTALLY TRUE! Everybody knows that everything Intel does fails miserably!

        • JustAnEngineer
        • 6 years ago

        Do you remember the Cougar Point chipset disaster from two years ago?

        How about the Pentium FDIV bug?

          • chuckula
          • 6 years ago

          TOTALLY! THEY WENT FROM PENTIUMS TO COUGARS TO THIS!

      • Kougar
      • 6 years ago

      I hear they may be considering upgrading IB-E to thermal compound, just like IB. It makes manufacturing so much cheaper than that old-fashioned fluxless solder that they're going to give the consumers lower prices!

      • MadManOriginal
      • 6 years ago

      2 months out from launch for a product that's already in production, and a product-breaking bug crops up that would be found with even the most cursory testing (as in 'Hey, this doesn't boot… wtf')? Unlikely to be real.

        • NeelyCam
        • 6 years ago

        Yeah; I guess JMP pulled an April Fools:

        [url<]http://blogs.barrons.com/techtraderdaily/2013/04/01/intel-rebuffs-jmp-warning-about-haswell-on-track-and-on-time/[/url<]

    • mganai
    • 6 years ago

    Typo alert: they’re 4930K/4960X, not 4830K/4860X.

    The only other notable boost is the token quad-core getting an unlocked multiplier this time.

    Another thing all of this means is that 6-core and up CPUs won't be trickling down to the mainstream market anytime soon.

      • HisDivineOrder
      • 6 years ago

      Why should they? Intel has no competition. We’re lucky Intel hasn’t done worse to us than this. This and nVidia’s slapping us around with Titan pricing is just more evidence AMD needs to stop screwing around and start competing. Problem is, I think they fired all their engineers…

      Now it seems all they've got are products they previously developed but delayed. Let's hope they come up with more in due course…

        • Airmantharp
        • 6 years ago

        This. Intel has chosen to compete with themselves in the desktop and laptop space, and that’s probably the only real reason AMD hasn’t folded; Nvidia is profiting wildly off of selling half-Keplers up to $500, and the full version for $1000.

        And the hard part is, AMD still has some great technology, if only they refined it a little more. Their CPUs seem like they could take over parts of the market if given some IPC tuning, especially in the multitasking/workstation space. Their GPUs might actually be extremely competitive after they fix the smoothness problem and gain the performance boost from greater efficiency, instead of just going toe-to-toe with smaller Nvidia chips.

        And they could really benefit from a split gaming/compute GPU strategy, like Nvidia does with their full-fat and half-size chips. Make a larger, compute-focused product that plays games well but really brings the goods on the compute side, and then make a smaller, compute-constrained but efficient product for the gaming crowd.

    • brute
    • 6 years ago

    K-K-X?

    does the X mean it comes with a cooler box?

    • Bensam123
    • 6 years ago

    Still stuck at six cores huh…

    I'm not really sure… after using both a 3570K and an 8350, I can honestly say the subjective feel of the processors isn't the same. While doing more than one thing (such as streaming and gaming), the 8350 feels quite a bit smoother; it definitely feels like it has more than four cores. I really hope benchmarking starts looking more at the multi-tasking aspect of processors while doing other main tasks, such as gaming with a dozen or so other programs open in the background… Steam, email, a couple tabs in Chrome, maybe a monitoring application, and chat programs (Skype). Basically representative of a normal user's workload.

    I stream, so that's a bit different from the average user's workload, but it definitely heightens what I'm talking about, as it's essentially a heavily multithreaded workload that's tied to FPS in game. Still waiting for a tech website to pick up on the streaming craze and add something that reflects what streamers are looking for in hardware. It just takes one look at Twitch.tv to see all of them.

      • MadManOriginal
      • 6 years ago

      [url=https://techreport.com/review/23750/amd-fx-8350-processor-reviewed/9<]Gaming while transcoding video[/url<] isn't a good test of this? Maybe there are things other than the CPU, like the network connection, that affect streaming. Video transcoding will use every bit of CPU resources it can. What else might be different about streaming that makes it more CPU-intensive than video transcoding, or that favors the AMD architecture that much? Does streaming rely on integer performance?

        • Bensam123
        • 6 years ago

        It actually hooks into the game you're playing, so they're a lot more dependent on each other (streaming does more than just encoding). I looked at that benchmark before and even commented on it in the comments section.

        It may seem similar, but it’s not the same thing.

        “What else might be different about streaming that it’s more CPU-intensive than video transcoding, or that the AMD architecture is that much more favorable? Does streaming rely on integer performance?”

        Good questions, hopefully a website can answer these when it takes a closer look at it.

          • Bensam123
          • 6 years ago

          Chuckula, actually take time to write a response instead of F5ing the page to downvote my comments.

          Realtime downvoting is actually something pretty interesting to watch.

            • chuckula
            • 6 years ago

            1. You think very highly of yourself to think that I’d care that much.
            2. Why would I downvote what is possibly the greatest April Fool’s prank in TR history?

            Let me re-enact my reading of your post:
            Bensam123 sez "honestly" followed by a bunch of words that imply (without expressly saying so) that he has used a 3570K… ROFLMAO! Like Bensam123 would ever debase himself by touching or being within 50′ of an Intel system.

            I admit that you almost had me there with your deadpan delivery that lacked any sense of humor or sarcasm, but I know what to expect when the calendar reads April 1!

            • Bensam123
            • 6 years ago

            You're a troll, and trolls most of all enjoy attention and pretending they're superior. As I've said before, you constantly throw out straw men. As a matter of fact, you're doing it right now.

            Chuck, what will you give me if I show you receipts? As I mentioned, I talked about this off and on in the forums, as I originally purchased a 3570K during the Thanksgiving sales, but then later (a few weeks ago) decided to change to an 8350 due to the 3570K not offering the experience I sought and having hunches about the 8350.

            The experience I was after wasn’t the absolute highest FPS, but the smoothest streaming experience. So far the 8350 has delivered this to me even though it’s a bit slower when it comes to single threads.

          • MadManOriginal
          • 6 years ago

          I didn’t say that streaming is the same thing as transcoding, just that transcoding is very CPU intensive.

            • Bensam123
            • 6 years ago

            Aye, similar, but not the same.

            …or are you implying you weren’t suggesting they were similar?

      • Waco
      • 6 years ago

      I’d bet good money that if I didn’t tell you which you were using you’d have trouble differentiating the two in normal use.

        • Bensam123
        • 6 years ago

        I'd bet that you haven't used both a 3570K and an 8350, or really any two of IB/SB and BD/PD.

        That aside, you can tell you're using a different processor because there is an FPS dip with the 8350 (depending on the game you're playing).

        For instance, in LoL I saw a loss in performance because the game is heavily single-threaded. I lost about 50-70FPS off a range of 150-300FPS. Whereas with NS2 I saw a gain in FPS at the low ends, such as big fights at the end of the game while streaming, going from ~25fps with a 3570K@4.2 to ~35fps with the 8350. The game feels better too.

          • Waco
          • 6 years ago

          You’d lose that bet. 🙂

          If you didn't monitor FPS you wouldn't see much of a difference except in very few circumstances. I owned an 8120 (albeit briefly) and hated it, because one of those edge cases happened to be running The Binding of Isaac in fullscreen mode without it being choppy. Even at nearly 5 GHz the 8120 was choppier than my lowly 3.5 GHz Phenom II X4 in that game. 😛

          In almost everything else it was essentially indistinguishable…as was my change to a 2600K. I convinced myself that the 2600K felt a lot faster but in reality I doubt any of the changes were really all that noticeable without some sort of measurement.

            • Bensam123
            • 6 years ago

            Did you try disabling core parking and were the thread scheduling patches for Windows 7 out at the time? Did you have them installed?

          • chuckula
          • 6 years ago

          There you go again claiming to have used both!! I like that you are persistent with the April Fool’s joke!

            • Bensam123
            • 6 years ago

            Do you need to see Newegg receipts? I even talked about my purchases over the last few months in the forums off and on.

      • NeelyCam
      • 6 years ago

      Ah – the good old “It just [i<]feels[/i<] faster" argument.

        • chuckula
        • 6 years ago

        I stole the smoothiness unit from BaronMatrix’s Phenom once…

        • Bensam123
        • 6 years ago

        Shit, that whole subjective argument. What are we going to do with people like this? It can’t possibly be right because people noticed it first instead of reading a number to start out with.

        It was either the change from the 3570K to an 8350, or going from an OC to a non-OC. I had my 3570K@4.2, but I haven't heard of that causing fluidity issues.

          • MadManOriginal
          • 6 years ago

          I know what it is – CORE PARKING!

            • Bensam123
            • 6 years ago

            Sir, if you read the core parking thread, you'd know Neely actually said it cleared up his Flash player stutter problems.

            Although you like throwing this one out a lot, it's starting to get old, as there are a lot of threads online from people who have had micro-stutter issues cleared up by it.

            • MadManOriginal
            • 6 years ago

            It’s funny how you think I’m saying it to be mean.

            • Bensam123
            • 6 years ago

            I originally thought it was a joke, but you simply point this out whenever I mention anything relating to subjectivity, so it's starting to turn into something that's mean, especially when there are people who still believe core parking is snake oil.

            I guess if they understand what core parking is, this is definitely a joke, but if they're simply looking to ride the bandwagon (which a lot of people in this thread are doing), then it offers an easy cop-out to any sort of logical argument. This whole thread is riding on a perception bias (AMD can't do anything better than its Intel counterpart!) right now, so this doesn't play out like a joke. I mean, look at Chuckula's responses.

            • NeelyCam
            • 6 years ago

            I’m a believer of core parking..

            EDIT: Hmm.. maybe I should call it unparking instead… Parked cores suck

            • chuckula
            • 6 years ago

            Then I saw it park…
            NOW I’M A BELIEVER!

            • Bensam123
            • 6 years ago

            Aye, the name is completely unintuitive, especially when you start adjusting it.

          • NeelyCam
          • 6 years ago

          -1 for using “first” in a non-first post.

            • Bensam123
            • 6 years ago

            You’re perpetuating evil Neely… but that’s why I like you. Go work on those hormone treatments now, I want a Tiffany!

            • NeelyCam
            • 6 years ago

            Tiff is even more evil, in a careless dominatrix kind of a way

            • Bensam123
            • 6 years ago

            If you’re a thai lady boy I think I can handle it.

          • chuckula
          • 6 years ago

          Once again.. (giggle) you say that you have used a 3570k… You have WON April Fools Day!

            • Bensam123
            • 6 years ago

            So what are you basing your information on, Chuck?

            • Bensam123
            • 6 years ago

            Seriously, you guys: how do I get rated down for questioning a completely baseless statement full of hyperbole that Chuck made? Also notice he didn't respond to any of my responses regarding this?

            Silly geese.

      • End User
      • 6 years ago

      [quote<]Still stuck at six cores huh..[/quote<]

      Touting core count when comparing AMD to Intel is a waste of time. As it stands now, a Vishera core is roughly 45% less powerful than a 3570K core. Vishera cores are going to be substantially weaker than those of Ivy Bridge-E. Ivy Bridge-E will destroy anything that AMD has to offer as far as outright performance is concerned, no matter how you look at it.

      [quote<]I'm not really sure… after using both a 3570K and an 8350, I can honestly say the subjective feel of the processors isn't the same. While doing more than one thing (such as streaming and gaming), the 8350 feels quite a bit smoother; it definitely feels like it has more than four cores. I really hope benchmarking starts looking more at the multi-tasking aspect of processors[/quote<]

      TR has multi-tasking benchmarks, and they don't back up your observations:

      [url<]https://techreport.com/review/23750/amd-fx-8350-processor-reviewed/9[/url<]

      "On the downside, the FX-8350's eight cores do not deliver a superior multitasking experience compared to the quad-core competition from Intel. Even the two-generations-old Core i5-760 is faster."

        • Bensam123
        • 6 years ago

        Madman also mentioned this; it's not the same thing.

        I'm not arguing per-core performance, don't read it that way. I'm also not saying a 3570K is inferior to an 8350, don't read it that way.

        I'm talking specifically about multi-threaded workloads, especially ones that are quite heavy on the multi-threading aspect (such as streaming). If you want to read why streaming is different than simply encoding a video in the background, read madman's response and mine to his at the top. Streaming does more than encode a video.

          • End User
          • 6 years ago

          What are you streaming?

          Edit: What is the CPU usage for your streaming task(s)?

            • Bensam123
            • 6 years ago

            It changes based on what I'm streaming. For LoL, as I mentioned a couple times, I lost about 50-75fps off of 150-300FPS, but overall it feels more fluid. Whereas with NS2 I gained about 10fps at the low ends, and it definitely feels smoother.

            The feel of the FPS doesn't match the actual FPS; something's off. Like I was getting 40fps in WoW the other day, which should be close to a seizure-inducing slide show for me, but it didn't feel anything like that. For the brief amount of time I got low FPS, it didn't seem to make a difference at all.

            I'm talking about FPS and not CPU usage because I don't exactly remember the CPU usage. I believe OBS was using roughly 18% with a 3570K, and it uses about 8-12% now with an 8350. That really depends entirely on what you're streaming, though. For LoL it's always sitting at 8% with an 8350. That's 720p/30fps/veryfast preset.

            Something I also noticed is that increasing the strength of the encoder preset (how hard x264 works to compress the video) widens the gap; there's a rough sketch of how to measure that below. The 8350 handles something like a fast preset better than the 3570K does, and of course even more so with a medium preset. Conversely, when I reduce the preset down to something minimal like ultrafast, there is virtually no difference between the 3570K and the 8350 in terms of CPU usage, so any advantage is largely a wash. I really can't stress how much smoother the 8350 feels, though, regardless of preset.

            This may be more of a bias, but changing to a 144Hz monitor definitely helped me notice these things. I didn't even notice slight stutters and performance inconsistencies when I was on my 60Hz monitor.
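
            For anyone who wants to poke at this themselves, here's a minimal sketch that times the same clip at different x264 presets. It assumes ffmpeg with libx264 is on your PATH and a local sample.mp4 exists (both assumptions), and it only measures raw encode cost, not the game hooking OBS does on top:

            [code<]
# Rough sketch: wall-clock cost of x264 presets on one clip.
# Assumes ffmpeg (with libx264) is installed and sample.mp4 exists.
import subprocess
import time

for preset in ("ultrafast", "veryfast", "fast", "medium"):
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "sample.mp4",
         "-c:v", "libx264", "-preset", preset, "-b:v", "3500k",
         "-f", "null", "-"],  # discard output; we only want the timing
        check=True, capture_output=True)
    print(f"{preset}: {time.perf_counter() - start:.1f}s")
            [/code<]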

            • End User
            • 6 years ago

            Wait. Did you upgrade from a 3570K to an 8350, or is this a two-system comparo running the same GPU and similar SSD/HDD?

            • Bensam123
            • 6 years ago

            This was an upgrade. The motherboards had slightly different features, but they were very similar.

            Asus P8Z77-V LK
            Asus M5A99FX Pro R2.0

            I swapped out the motherboard/memory/CPU, and W7 redetected and reinstalled everything for me. I uninstalled drivers and such and cleaned up leftovers from the other motherboard; otherwise nothing changed. I was even using the same version of the graphics drivers. It wasn't a reinstall or repair.

            Although my SSD array died like two weeks ago, that’s a different story.

      • Airmantharp
      • 6 years ago

      My (G)od(s), people, stop beating him up for sharing his experience! Is it unreasonable for the FX-8350 to perform as its paper specifications promise for some real-world workloads?

        • Bensam123
        • 6 years ago

        It’s okay Air, you may put the nail in my other hand, I forgive you.

        This is all part of changing an extremely heavy Intel bias. It's almost entirely impossible for people to think an 8350 can actually do anything better than its Intel counterpart.

        I'm not really discussing better single-threaded performance; as I mentioned, it doesn't have it, and in some games, such as LoL, I lose FPS (but when you're playing at 150-300fps it doesn't really matter). In others there is a gain, but subjectively it feels a lot more fluid while streaming.

        Maybe I got a bum 3570K? I could only hit 4.2 with it, and core 3 kept having errors above that threshold unless I gave it a lot of juice, but I sorta doubt it.

          • NeelyCam
          • 6 years ago

          [quote<]It's almost entirely impossible for people to think an 8350 can actually do anything better than its Intel counterpart.[/quote<]

          I think the barrage of downthumbs is mostly because you're arguing your point based on "feelings" that directly contradict trustworthy third-party testing based on hard numbers instead of feelings. Maybe you forgot to disable core parking on that 3570? Or are you saying that you overclocked both, just that you managed to overclock the 8350 better? Those benchmarks were with stock clocks.

            • Airmantharp
            • 6 years ago

            But what if the scenario that he’s using hasn’t really been tested? I think that’s where he’s coming from here- he’s into streaming, and sees that a lot of other people are into streaming, and then noticed that a CPU that by all rights (as you say above, and I agree) should be slower is actually faster overall, and he wanted to share that experience.

            I think the down-thumbs are from people that aren’t willing to give him the benefit of the doubt in his explanation. He knows the ‘rule’ as well as the rest of us, and he took the time to show us an exception; it’s hard to believe that it’s being discouraged.

            • Bensam123
            • 6 years ago

            Yup. I know full well what the benchmarks of the 3570K are and that the 8350 is worse off in many cases; I'm not trying to disprove that. I'm simply offering my subjective viewpoint, which is unexplored, and I believe more testing should be done in this area.

            It just turned out that the 8350 seems more 'fluid' even at worse frame rates under a heavy multi-threaded workload, and that's something unique, something that hasn't been tested, and also something I highly value as a streamer.

            While it may not be applicable for the day-to-day user, depending on the results, it may be very relevant to power users, other people who multi-task a lot, and of course streamers. Streaming is unique in its own way (such as actually tying encoding in with a 3D workload), so it definitely makes this specific scenario stand out.

            If the 8350 had offered a worse experience, I had planned on eBaying it and eating the $50-75 loss before I sold my 3570K. I honestly haven't found any notable downsides to it. The only glaring flaw is power efficiency, but I really don't care about that. Windows performance seems more fluid too (I'm a heavy multi-tasker), but that's highly subjective.

            The only other thing that may have skewed my views is the OC I applied to my 3570K (putting aside biases). I have yet to OC my 8350, but really that shouldn't matter.

            I just hope TR takes a look at this aspect before Haswell and Steamroller come out, which will completely wash any sort of comparison out. It definitely could make for some interesting results if TR applies its new methodologies to a streaming scenario, especially if a 120Hz refresh is thrown into the mix. That, I think, could be particularly interesting.

            • Bensam123
            • 6 years ago

            There aren’t any tests for what I’m talking about and that’s what I’m arguing for. It subjectively feels different, so I believe more testing should be done in this area so we can either prove or disprove this.

            Feeling or subjective interpretation isn't quantifiable, but it's usually a good indication that something is different. "How different?", "How much of an impact?", "Is it simply a perception bias?" are all things that should be answered.

            I've asked a few different times for TR to look at streaming performance because it's definitely a different workload, and the streaming segment is definitely growing in all different areas (not just gaming). It's very applicable, yet there aren't any websites that have focused on this, and TR has yet to oblige as well.

            People here commonly mistake subjectivity for a factual flaw. If someone reports that something subjectively feels different from what test results show, or even talks about subjective results in general (as in this thread), then that person is treated as outright wrong. Sometimes testing doesn't measure what you're looking to measure, so you have to find other ways to benchmark it (heck, that's the premise for the whole frame-rating and frame-time benchmarks). Other times the areas aren't even explored, and it serves as a good starting point for scientific exploration.

        • End User
        • 6 years ago

        Did you read his opening line? It shouts out BRING IT ON!

          • Bensam123
          • 6 years ago

          Six cores, as in I was hoping for more from both Intel and AMD, just as Scott talks about SB-E having something like 16 cores on board with 3/4 of them not active. That should make anyone go :(.

          The second part was supposed to be a separate but related piece of information and experience.

          Although I wasn't against taking on the Intel zeitgeist in this thread, it wasn't supposed to read like "Intel only has 6 cores while AMD has 8 cores, lololroflcopter, twomoar coarz are better!"

    • CaptTomato
    • 6 years ago

    I went from a single-core AMD 3000 in 2005 to a dual-core E8400 in 2008, a beautiful power boost, and the same again with my i3570 in 2013: HUGE performance. My new system is 3-4x faster than my old E8400 / 4 gigs of RAM + 6850.

    I'm now waiting for the right combo of SW+HW that would enable an upgrade to a 6- or 8-core CPU; otherwise I'll stay with IBridge for years.

      • OneArmedScissor
      • 6 years ago

      The problem is that something is either truly parallel, or it is a limited collection of single-threaded tasks. Example:

      Games are a mix of both scenarios. But GPUs [i<]already[/i<] handle the parallel part. The tasks for CPUs have dwindled from including physics and AI (GPUs can do this) down to pretty much audio and the OS. Why add more cores for fewer tasks? Add more GPU!

      SoCs have also adopted the dual/quad-core + indefinitely expanding GPU paradigm. In the future, a hypothetical 5th and 6th core in mainstream parts will be extremely specialized GPU cores. Think QuickSync, HD decoders, 2D GPUs, the old PhysX cards...

        • CaptTomato
        • 6 years ago

        Sounds reasonable. Either way, I got a huge boost from my new system; everything I do is significantly faster. As such, I'd need another huge boost from future HW to make me take the plunge… this probably also applies to SB 2500/K/2600/K.

          • OneArmedScissor
          • 6 years ago

          I don’t doubt it. You jumped to double the cache clock speed, and an integrated memory controller, PCIe controller, and ring bus.

          In that department, the best is yet to come, like integrated RAM, which could make even phones crazy powerful.

          There is more to look forward to than ever, just not in the same old places.

            • CaptTomato
            • 6 years ago

            When games look as good as Battlefield 4, they could stay that way for 10 years and I'd be happy. So I have my doubts any handheld or integrated device will do the job; maybe in 5 years' time.

            • Airmantharp
            • 6 years ago

            You say that now :).

            We’ll never need more than 640k of RAM.

            • CaptTomato
            • 6 years ago

            I'm saying that Battlefield 4 looks incredible… it looks better on compressed YouTube than Crysis 3 does on my 26in 1920×1200 8-bit LCD… but it'll probably want a Titan to max it, so how long before IGPUs have that level of power…?

            • Airmantharp
            • 6 years ago

            'Max' is quite relative; we can put five 1080p or three 1440/1600p displays together and see what it takes.

            I'm at 2560×1600, and I can't even 'max' BF3 in MP with a pair of GTX 670s and a 4.6GHz 2500K.

            But that’s not the point; yes BF4 looks great in the video, and I’m not saying that it won’t look that good in person, but that we still have a long, long way to go when it comes to real-time rendering at the consumer level.

          • moose17145
          • 6 years ago

          You might be waiting a while… I'm still running a socket 1366 system with an i7-920 I built in December of 2008, and I am still waiting for something I would consider significant enough to upgrade for.

            • CaptTomato
            • 6 years ago

            I generally leave it to intensive games or storms before I upgrade, so as long as you’re happy and operating, it’s all good.

            My i3570+7950 eats Crysis 3 at 1200p maxed with low AA.

            • CaptTomato
            • 6 years ago

            Both my CPU cores and my Sapphire Vapor-X are currently idling at 30°C, in a 23-25°C room.
            I have the CM 212 with a single fan on the CPU.

    • Anonymous Coward
    • 6 years ago

    I have a hard time believing that Intel would make such a small change. Not even an improvement in cache sizes. This is supposed to be a product for people who care about performance…

      • NeelyCam
      • 6 years ago

      Probably the main benefit is improved power efficiency. And no – TDP doesn’t tell the whole story.

        • Anonymous Coward
        • 6 years ago

        Improved power efficiency, maybe more time in turbo, improved IPC. Still, this is really crappy if they are just chopping off huge parts of the die like they did with the previous version.

      • OneArmedScissor
      • 6 years ago

      “Improvement” in cache size is relative. Lower latency benefits PCs. Larger caches raise latency.

      For PCs, we’ve been past the point of diminishing returns since Core 2, which had 3MB per core. Now 1.5MB is often ideal, so Intel makes most CPUs that way.

      Here's a really extreme example: super-low frame latency in an FPS:

      [url<]https://techreport.com/review/23750/amd-fx-8350-processor-reviewed/7[/url<]

      Look at how the regular i5 beats the regular i7, which beats the SB-E i7. The real reason the E parts have more L3 cache is to handle extremely high memory bandwidth in data centers.

        • Anonymous Coward
        • 6 years ago

        Yeah, but this is already a die with maybe 20MB of L3; I don't see that fusing some of it off will improve the latency of what is left.

          • OneArmedScissor
          • 6 years ago

          Their cache is arranged in "blocks" dedicated to each core, which vary from 1.5MB to 2.5MB. That's up to a 66% increase from the i5, which could add a lot of latency.

          It’s 25 cycles vs. 30 cycles between just the i7s:

          [url<]http://www.anandtech.com/show/5091/intel-core-i7-3960x-sandy-bridge-e-review-keeping-the-high-end-alive/4[/url<]

          Since 2MB is a wash, Intel likely only builds the mainstream chips with that much for yield purposes. The quad-core SB-E is actually a different chip than the 6/8-core parts. It doesn't have anything fused off. The IB-E parts also must be a dedicated chip, as the full-size Xeon has 12 cores. The quad-cores are likely die-harvested, but they still have the full 2.5MB per core, so nothing fused off there.
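
          As a back-of-the-envelope check (the 3.5GHz clock here is an assumption, just to put the cycle counts into time units):

          [code<]
# Convert L3 hit latency from cycles to nanoseconds at an assumed clock.
CLOCK_GHZ = 3.5  # assumption; both parts run in this neighborhood

for name, cycles in (("regular i7 (25 cycles)", 25),
                     ("SB-E i7 (30 cycles)", 30)):
    print(f"{name}: {cycles / CLOCK_GHZ:.1f} ns")
# ~7.1 ns vs ~8.6 ns on every L3 hit
          [/code<]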

            • Anonymous Coward
            • 6 years ago

            Sounds like we are depressingly close to an optimal general purpose processor given the constraints of silicon.

            • Airmantharp
            • 6 years ago

            We’ve been there for a while!

            Outside of specific workloads (that require, well, work), anything built in the last decade can run office apps and check email.

            In the last five years, it can run Windows 7 and all the latest applications.

            But given the constraints of silicon? Hardly. Intel and ARM are still pushing fabrication nodes at a rapid pace attempting to converge on the optimum balance of mobile performance and battery life, while Intel and AMD push the performance envelope of large-die designs in the commercial sector.

            And if AMD manages to put some performance pressure on Intel in the consumer market, Intel will be forced to start tuning for clockspeeds again instead of tuning just for performance per watt and adding bigger GPUs.

            • Anonymous Coward
            • 6 years ago

            [quote<]But given the constraints of silicon? Hardly.[/quote<]

            I'd say we must be pretty close to optimal if Intel can't be bothered to find something interesting to do for the top-end processors. It appears they must at least have optimized the memory structure. The CPU core itself might have more room for wiggle, but it's pretty hard to tell if Intel is just being conservative and AMD is over-reaching, or if Intel really is occupying the one sweet spot of design.

    • Geonerd
    • 6 years ago

    Ugh! At this rate, people will be buying Steamroller out of sheer boredom.
    Here’s hoping that AMD can actually get the thing out the door by late 2013…
    Until then, Intel will continue to milk the cow (the consumers!) for all they can.

    • brucethemoose
    • 6 years ago

    Remember how the 2600K made the i7-970/980X pretty much irrelevant for most consumers?

    Why buy any of these when a Haswell quad-core will outpace them most of the time anyway?

      • Srsly_Bro
      • 6 years ago

      Some use their CPUs for more than just gaming.

        • Airmantharp
        • 6 years ago

        +1; but from the broad consumer and gaming perspective, is he really wrong?

        You’d need to have a pretty special application to justify paying another ~$300 to get a quad-channel Ivy over a dual-channel Ivy, in addition to the increased cost of the board and possible increase in memory cost.

        • brucethemoose
        • 6 years ago

        But most don't 😉

        If you do need the extra muscle and can't justify the cost of a Xeon, these make a lot of sense. However, unlike the 990X, they hardly bring anything new to the table compared to the previous generation.

      • moose17145
      • 6 years ago

      Actually, there are still a few programs out there that WILL run better / faster on the 1366-socket systems than on the newer 1155 sockets. The main reason is that the 1366 socket has a triple-channel memory design. Granted, the 1155 sockets can officially support faster memory speeds… but even still, on average a 1366-socket system is going to have more overall memory bandwidth to play with than a socket 1155 system.

      For example, I am still running an i7-920 system. My memory is only running at 1066MHz right now, even though the RAM itself is rated for operation up to 1333MHz (i.e., the stock speed that the 920 supports without OCing and such). But since I have 50% more memory channels than the 1155 socket (3 vs 2), a 1155-socket system has to run its memory at 1,600MHz just to keep up with what mine can do with pedestrian 1066MHz memory. Then once you consider that most of the i7-9xx series supported 1333MHz memory, your memory would have to be running at 2,000MHz just to keep up with the stock ratings on most of the i7-9xx chips, even though those parts are now EOLed, having been replaced by the 2011 socket, which has 4 memory channels! In which case you would (more or less… for the sake of keeping it simple) have to run your memory at DOUBLE the speed on an 1155 system to have the same memory bandwidth as a 2011 system. (There's a quick sketch of this math at the end of this comment.)

      Now granted… the program you are running needs to be limited primarily by memory bandwidth rather than the raw processing power of the CPU, and those particular programs are few and far between in the end-consumer world. But as others have mentioned, these are workstation chips / parts. And in the workstation / server world, applications that need copious amounts of memory bandwidth are much more commonplace. In fact, I would be fairly certain that the SB-E models were already running into memory bandwidth issues and that Intel upped the supported memory speed from 1600 to 1866MHz to help alleviate some of that. You CAN in fact buy a full-blown 8-core Xeon. And believe me… it takes A LOT of memory bandwidth to keep all 8 of those hyper-threaded cores fully fed, even when you are running programs that are not predominantly memory-bandwidth intensive. If IB-E is going to scale up to 12 cores, the problem is only going to get worse.

      I think it was a few years ago that Intel said they were doing performance studies on multicore CPUs, and part of their findings basically showed that you want at least one memory channel for every two cores. Once you have more than 2 cores per memory channel, you start having issues keeping the CPU fed with enough data to keep it busy. Again… you can partially alleviate that by simply having faster RAM, but moving to faster memory also has its own drawbacks, such as increased latency / timings.
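
      Here's the quick sketch of that bandwidth math mentioned above (peak theoretical numbers only; real-world throughput is lower):

      [code<]
# Peak DRAM bandwidth = channels * data rate (MT/s) * 8 bytes per transfer.
def gb_per_s(channels, mt_s):
    return channels * mt_s * 8 / 1000

print(f"LGA1366, 3ch DDR3-1066: {gb_per_s(3, 1066):.1f} GB/s")  # ~25.6
print(f"LGA1155, 2ch DDR3-1600: {gb_per_s(2, 1600):.1f} GB/s")  # ~25.6
print(f"LGA1366, 3ch DDR3-1333: {gb_per_s(3, 1333):.1f} GB/s")  # ~32.0
print(f"LGA2011, 4ch DDR3-1600: {gb_per_s(4, 1600):.1f} GB/s")  # ~51.2
      [/code<]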

        • Airmantharp
        • 6 years ago

        Remember that while timings go up with memory clockspeed, latency is still relative. Timings tend to relax more slowly than clockspeeds go up with DDR3, so the higher-speed modules wind up with similar or lower actual latencies in the end (see the sketch below).

        That said, yeah, two cores per channel does sound about right, if all of the cores are doing real 'work'. Thankfully there are faster memory standards available than DDR3.
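
        A minimal illustration of that point; the CL/speed pairings below are typical-looking assumptions, not figures from this thread:

        [code<]
# First-word CAS latency in ns = CL * 2000 / data rate (MT/s).
def cas_ns(cl, mt_s):
    return cl * 2000 / mt_s

# Assumed-typical DDR3 timings for illustration:
for cl, speed in ((9, 1333), (9, 1600), (10, 1866), (11, 2133)):
    print(f"DDR3-{speed} CL{cl}: {cas_ns(cl, speed):.2f} ns")
# 13.50, 11.25, 10.72, 10.32 ns: faster modules end up with
# similar or lower absolute latency despite higher CL numbers.
        [/code<]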

    • colinstu12
    • 6 years ago

    I wonder if there will be any IPC increase between the 4830K and the 3930K. There wasn't with SB->IB, so I'm sure it will be the same.

    Hopefully they don't use the crappy TIM method between the die and IHS (like with IB) instead of traditional soldering (like with SB).

    I wonder how well these new IB-E chips will overclock. Better? Same? Worse?

    No IPC increase and same/worse OC performance… I'd go buy a used 3930K (to upgrade from my 3820).

    ~5% more IPC performance, though… might be worth it. Especially if it can overclock more easily.

      • Srsly_Bro
      • 6 years ago

      [url<]http://www.tomshardware.com/reviews/ivy-bridge-benchmark-core-i7-3770k,3181-24.html[/url<]

      The average performance advantage of the 3770K over the 2700K is 3.7%.

      In your view 3.7% is nothing, and ~5% might be worth it. Are you sure you're talking about the right CPU?

        • chuckula
        • 6 years ago

        Oops! Sorry for that downthumb… I mistakenly thought you were saying that Haswell would only be 3.7% faster clock for clock than Sandy Bridge. N/M

          • Srsly_Bro
          • 6 years ago

          It’s okay. Down thumbs just mean people love me.

    • BIF
    • 6 years ago

    This is a big yawner!

    I have a Sandy Bridge-E six-core now. There is no motivating reason to upgrade to anything with fewer than 8 cores/16 threads.

      • Srsly_Bro
      • 6 years ago

      I agree. 8 cores or gtfo.

    • OneArmedScissor
    • 6 years ago

    Looks like they’re making the Xeon version as two different chips again, but now the entire PC line uses the “smaller” one (which is still huge).

    • sschaem
    • 6 years ago

    The i7-3930K was released in Q4 2011, and the refresh should appear by Q3-Q4 2013?!
    So we have a two-year window for workstation CPUs now?! Next refresh in Q3 2015?

    And what a lackluster refresh: going from 6 cores at 3.2GHz and 130W to 6 cores at 3.4GHz and 130W
    (same cache size) after 2 years of waiting 🙁

    The world of CPUs is not changing… it has changed.

    Intel fused off 2 cores in SB-E, and possibly 4 cores in IB-E?!?!
    I truly wish the best of luck to AMD's Steamroller;
    this holding back of workstation compute power by Intel is just sad.

      • smilingcrow
      • 6 years ago

      But these aren't workstation CPUs; those would be Xeons. These are high-end consumer chips derived from workstation chips. Still sucks for those in that market segment.

        • sschaem
        • 6 years ago

        You are right; LGA2011 and $1,000 CPUs are considered desktop parts by Intel, and they do recommend Xeons for all workstations, even entry-level ones (according to their workstation selector).

        I really thought Xeons were Intel's server processors, and the E series was their entry-level workstation-class platform…

        From what I can tell, the same 6-core Xeon model is 2x the price. That seems outrageous?!
        Both LGA2011:

        i7-3930K – $540
        Xeon E5-1660 – $1100

        And it's $1900+ for the 8-core model. I guess I will stay with consumer parts for my next workstation.

          • smilingcrow
          • 6 years ago

          Only the Xeons support ECC RAM, but not when using the desktop chipsets, of course. 🙂

            • thecoldanddarkone
            • 6 years ago

            Actually, the X79 chipset supports non-registered ECC. However, it's up to the vendor to actually support it. My Asus P9X79 WS supports ECC. You need a Xeon to use ECC DIMMs, though.

            • smilingcrow
            • 6 years ago

            Thanks, I was thinking of LGA1155 and not the X79 et al.

          • OneArmedScissor
          • 6 years ago

          E5-1660 = 3960X – both $1,000

          E5-1650 = 3930K – both $600

          Xeons do not cost more. They historically cost the same [b<]or less[/b<] for the same set of features. In recent years, Xeons have even been sold with more cores for less than their PC counterparts. They are not a better or worse deal. They just have lower clocks, and you must pick what is right for you. For example, right now with Sandy Bridge-EN Xeons, quad-cores start at $180, 6 cores start at $380, and 8 cores start at $1,000. But Intel isn't playing some kind of disingenuous marketing game here. You always get what you pay for. The aforementioned "discounted" SKUs are no good for anything except a multi-socket server.

            • smilingcrow
            • 6 years ago

            The cheapest LGA1155 4C/8T chip is the Xeon E3-1230 V2 @ $215.
            It lacks graphics though.

            • OneArmedScissor
            • 6 years ago

            EDIT: See below.

            • shizuka
            • 6 years ago

            Sorry to burst your bubble, but the E3-1225 doesn’t have hyper threading.

            [url<]http://ark.intel.com/products/52270/[/url<]

            • OneArmedScissor
            • 6 years ago

            Oops, you are right. I misread the extremely busy chart I was reading off of.

    • CaptTomato
    • 6 years ago

    Who would buy this with Lame Haswell round the corner?

      • sschaem
      • 6 years ago

      At $1,000 for a 130W-TDP, 6-core, 3.6GHz Ivy Bridge, a few people will go for a 4-core Haswell instead.

        • CaptTomato
        • 6 years ago

        Haswell doesn’t seem worth waiting for though.
        And for those who thumb me down, I’m a DESKTOP PC user, and from my POV, Haswell offers nothing meaningful over my i3570.

          • Airmantharp
          • 6 years ago

          Haswell doesn’t offer anything over my 2500k at 4.6GHz+, either. Well, PCIe 3.0 and faster integrated graphics, but overall nothing that I’d use.

            • NeelyCam
            • 6 years ago

            Yep. I would even argue that, for a desktop, Haswell doesn’t offer that much of an upgrade to anyone with a quadcore Lynnfield or newer

            • OneArmedScissor
            • 6 years ago

            I would be interested in seeing the “inside the second” test done on a Core 2 Quad vs. Nehalem vs. SB with the same clocks.

            I have a sneaking suspicion that even Core 2s hold up well with their ginormous and super low latency L2 cache.

            • Airmantharp
            • 6 years ago

            They will for some games, and then get clobbered by others.

            Battlefield 3 MP is my best example, as it stresses both GPU and CPU heavily, especially where it's needed the most (close-in firefights on big maps with lots of players). With the generally low clockspeeds of the Quads (and of Lynnfield), along with difficult overclocking and slow DDR2, they are pretty well outmatched here.

            Now WoW or StarCraft II and the like? You’re fine.

            • Sahrin
            • 6 years ago

            This has been my dilemma ever since buying an i5 750. No processor has offered a compelling upgrade path – which is going to become a priority relatively soon if new standards become more important.

            The only ‘saving grace’ of Sandy and later is AVX. If Intel is ever able to convince developers to make widespread use of it, that could be important.

            • CaptTomato
            • 6 years ago

            Yes, but the IGPU will still be slower than, say, a 7850.

          • Farting Bob
          • 6 years ago

          You've got a high-end, current-generation CPU built on the same process node as the new Haswells. I can't remember the last time the jump from one generation to the next in CPUs was worth upgrading for. Every 2-3 generations halves your cost while giving a more meaningful upgrade.

          Haswell will likely be much more worth it for those on slower SB or anything before it, though.

          • HisDivineOrder
          • 6 years ago

          Well, there’s AVX2. And a decent performance gain regardless. Plus, perhaps they’ll start using fluxless solder again to return the chips to premier overclocking. A lower idle is always kinda nice for any PC, including desktops. It’s not essential.

          Plus, Haswell will force everyone to type all the letters. H, a, s, w, e, l, l. Ivy Bridge let us abbreviate it. Just think of all the keyboards that will slowly wear out and die over the course of people everywhere continuously typing, “Haswell.” Broadwell will be even better in this regard. Intel’s always lookin’ out for their keyboard manufacturer pen pals!

            • JustAnEngineer
            • 6 years ago

            Can we co-opt “HW” or should we use a more stylish “Ha” ?

            • Sahrin
            • 6 years ago

            I’m betting that “Has” becomes the dominant mode of reference.

            • NeelyCam
            • 6 years ago

            I think I’ll go with HW for HasWell; it works with IB and SB. And I think I’ll call the next ones BroadWell (BW) and SkyLake (SL)

            • CaptTomato
            • 6 years ago

            I could see myself buying a Haswell+SSD for a future notebook, but nothing exciting on the desktop.

    • Mr. Eco
    • 6 years ago

    Why are they not i7-3xxx, since they're Ivy Bridge chips?

      • Sargent Duck
      • 6 years ago

      Because that would be too easy. We can’t have computer hardware making sense can we?

        • thecoldanddarkone
        • 6 years ago

        Well, that and because Sandy Bridge-E chips are already 3xxx. lol

    • chuckula
    • 6 years ago

    L.A.M.E.

      • smilingcrow
      • 6 years ago

      Only 6 cores! People waiting for an upgrade will be disappointed.
