Rumor mill pegs Sandy Bridge E power draw at 180W

Perhaps this explains why we’ve been hearing whispers about delays to Intel’s "Sandy Bridge E" processors. VR-Zone is reporting that Gulftown’s successor is a rather power-hungry beast. Although the chip is supposed to have the same 130W TDP rating as its predecessor, the actual power consumption is purportedly closer to 180W. The chip’s power requirements are apparently so demanding that Intel is instructing PSU makers to ensure that their units can supply at least 23A on the secondary 12V rail devoted to the CPU.
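As a quick sanity check on that figure, the raw arithmetic behind the rail requirement looks like this (a minimal sketch; the margins shown are connector capacity before any conversion losses):

```python
# Quick arithmetic on the rumored power-delivery spec.
rail_voltage = 12.0    # volts, the CPU's dedicated 12V rail
rail_current = 23.0    # amps Intel is reportedly asking PSU makers to guarantee

rail_capacity = rail_voltage * rail_current   # 276 W at the connector
print(f"Rail capacity: {rail_capacity:.0f} W")
print(f"Margin over the official 130 W TDP:  {rail_capacity - 130:.0f} W")
print(f"Margin over the rumored 180 W draw:  {rail_capacity - 180:.0f} W")
```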

The article detailing those power requirements also suggests Intel will be shipping Sandy Bridge E CPUs sans heatsink. Gulftown chips are sold in retail boxes that include Intel coolers, so it’s unclear whether the new CPUs will only be sold as bare chips or if they’ll simply be squeezed into much smaller retail packaging. Intel is said to be producing its own cooler for the processors, though; it just won’t be shipped in the same box.

These rumors have yet to be confirmed, of course, but everything we’re hearing points to something pushing Sandy Bridge E past its original due date. All the mobo makers we talked to at Computex just a couple of months ago expected to have X79 motherboards primed for the new processors in the August/September time frame. Those same folks are being rather tight-lipped about the launch schedule now, however.

Comments closed
    • ish718
    • 8 years ago

    Eh, I guess AMD can’t be the only one with an 8 core desktop cpu.

    • OneArmedScissor
    • 8 years ago

    I have to wonder if the 180w is actually a mistake in reference to the full 8 core, 24MB L3 version, which would function very much like an overclocked Xeon E7-8837. That’s an existing Westmere-EX (32nm, ring bus, quad-channel IMC) with 8 cores, 24MB L3, 2.67 GHz base clock – and 130w TDP.

    Add a few hundred MHz to the cores, cache, and ring bus, integrate a PCIe 3.0 controller, and you could maybe get 150w…which was another rumor already going around, but specifically applying to that one bazillion dollar, server-bound configuration.

    The idea that a more advanced chip would trade a few cores, tons of cache, and a multi-socket spider web for a few hundred MHz and 60W more heat is a bit silly.

    • smilingcrow
    • 8 years ago

    SB-E is fairly insignificant itself, BUT seeing as it is in effect a workstation CPU repositioned for the desktop, if this wild rumour is true the real issue for Intel would be the Xeon range.

    You can’t beat a good rumour to get the fangirls on both sides bleating like sheep. Baaaaaaaa.

    • Arclight
    • 8 years ago

    "Rumor mill pegs Sandy Bridge E power draw at 180W"

    Wasn't the rumour mill saying similar things about Bulldozer a few months ago? I'd call it BS.

    • tygrus
    • 8 years ago

    Is the 180W the peak instantaneous power required, so that the average over a second is more like 150W and the average over several minutes is 130W once the chip is up to its upper boost/temperature threshold?
    Is Intel planning to increase the integrated GPU speed/performance, which would then use more power, to compete with AMD’s larger integrated GPU?

    23A @12v = 276 watts.

      • NeelyCam
      • 8 years ago

      "23A @ 12V = 276 watts."

      True, but CPUs don't operate from 12V. DC/DC converters and regulators have losses, too.
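      A minimal sketch of that point, with assumed (not measured) VRM efficiencies, shows how much of the 276W would actually reach the CPU:

```python
# Illustration of the point above: 276 W is input power to the motherboard VRM,
# not power delivered at the CPU's core voltage. The efficiency values below are
# assumed for illustration only, not published figures.
rail_power = 23.0 * 12.0   # 276 W drawn from the PSU's 12V rail

for vrm_efficiency in (0.80, 0.85, 0.90):
    cpu_power = rail_power * vrm_efficiency
    print(f"VRM at {vrm_efficiency:.0%}: ~{cpu_power:.0f} W actually available to the CPU")
```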

      • willmore
      • 8 years ago

      The -E parts have no integrated graphics, IIRC. So, that’s off the power budget. I’m really wondering where all this power is going.

        • NeelyCam
        • 8 years ago

        See post #13.

    • siberx
    • 8 years ago

    These rumors are a load of crock, or at least a vast misinterpretation of the given information.

    For starters, it’s unclear how an SB-E processor *could* consume this much power at stock, considering its pedigree and predecessors; if a 980X has a 130W TDP on the same manufacturing process (that’s a triple-channel memory chip with a pile of cache and an older, less efficient architecture) and an i7-2600K (similar core architecture to these new processors) has a TDP of 95W for a quad core, then they would have to seriously screw up their memory controller and uncore elements for the consumption to rise so high. Adding a couple of cores and memory channels to go from 95W to 130W, or improving the architecture to add a single memory channel and maintain the same 130W TDP, both make sense.

    Additionally, there’s a good reason that all desktop CPUs have been in the 65W-130W range for 6+ years (a surprising constant in an industry struck by a pervasive need for change in all aspects); it has been determined that powering and cooling a single chip at wattages any higher than this (for a standard consumer-oriented stock system; overclocked rigs don’t count due to the extra money invested in powering and cooling them) is simply not financially sensible.

    Now, it’s *entirely possible* that if the chip were to run with everything enabled simultaneously and all cores at their maximum turbo boost, then it would consume 180W; that’s exactly why Intel *introduced* turbo mode: so that under lightly threaded scenarios they could extract extra power budget from the cores without exceeding TDP. This doesn’t mean the chip will, as sold, be set up in such a way as to be allowed to consume 180W under any circumstances.
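    To make that turbo-budget idea concrete, here is a toy sketch of the concept (this is NOT Intel’s actual turbo algorithm; the wattages and the budget cap are made-up illustrative numbers):

```python
# Toy energy-budget model of the point above: the chip accrues headroom at the
# TDP rate and can spend it in short bursts above TDP, so a 180 W peak does not
# imply 180 W sustained. This is NOT Intel's actual turbo algorithm; the wattages
# and the budget cap are illustrative assumptions only.
TDP = 130.0          # watts, the sustained limit
BURST = 180.0        # watts, hypothetical short-term turbo draw
MAX_BUDGET = 1000.0  # joules of stored headroom after idling (assumed cap)

budget = MAX_BUDGET  # start fully charged, e.g. after a lightly threaded period
trace = []

for second in range(60):  # 60 seconds of heavy load
    power = BURST if budget >= (BURST - TDP) else TDP
    budget = min(MAX_BUDGET, budget + TDP - power)  # refill at TDP, drain at draw
    trace.append(power)

print(f"Seconds spent bursting at {BURST:.0f} W: {trace.count(BURST)}")
print(f"Draw then settles at {trace[-1]:.0f} W, i.e. back at the rated TDP")
```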

      • willmore
      • 8 years ago

      Let’s see, compared with the 2600K:

      50% more cores (6)
      100% more memory interfaces (4)
      Higher Turbo frequency (+100MHz)
      High speed QPI interface (possibly two of them)
      100% more PCI-E lanes
      PCI-E 3.0 vs PCI-E 2.0 lanes–judging from the spec, it could be anywhere from 60% more power to over 200% more power per lane.

      So, will that push a 95W CPU past 130W? It very well could. I, for one, won’t discount these rumors out of hand. But, it’s clear that more information is needed. Hard facts would be nice.
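      To put rough numbers on that tally, here is a back-of-the-envelope sketch; every per-component delta below is a placeholder guess for illustration, not a measured or published figure:

```python
# Back-of-the-envelope tally of the list above, starting from the 2600K's 95 W.
# Every delta is a placeholder guess for illustration only.
base_tdp = 95.0  # watts, i7-2600K

guessed_deltas = {
    "2 extra cores":              25.0,
    "2 extra memory channels":    10.0,
    "extra L3 cache":             10.0,
    "QPI / extra PCIe 3.0 lanes": 15.0,
}

total = base_tdp + sum(guessed_deltas.values())
for part, watts in guessed_deltas.items():
    print(f"  +{watts:>4.0f} W  {part}")
print(f"Guessed total: {total:.0f} W (vs. the official 130 W TDP)")
```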

    • Coran Fixx
    • 8 years ago

    Sandy II: The Wrath of Prescott

      • ClickClick5
      • 8 years ago

      HA!

      “Pres-hott”

      • willmore
      • 8 years ago

      I think you mean Tejas.

    • michael_d
    • 8 years ago

    No big deal as long as it translates into a significant performance increase.

    • End User
    • 8 years ago

    Cripes! That is only 10W less than the total TDP of my dual 6 core X5650 rig.

    • cybot_x1024
    • 8 years ago

    Whoa there, Intel!

    Sounds like they glued two Socket 1155 chips together and threw them into one package!

    • Krogoth
    • 8 years ago

    I call shenanigans on these rumors.

    A 180W TDP is too much, even for prosumers. A 100-130W TDP seems more likely for the performance parts, while the energy-efficient stuff goes down further (55-90W).

    I don’t see why we should get all excited about Sandy Bridge-E. It is going to be a Sandy Bridge with more cache and cores. Neither is going to reap significant benefits for mainstream users and gamers. Just stick with the existing Sandy Bridge units or wait until Ivy Bridge comes out.

      • flip-mode
      • 8 years ago

      Wow, I find myself having to thumb-up a Krogoth post.

        • NeelyCam
        • 8 years ago

        …and you’re paying for it dearly.

          • flip-mode
          • 8 years ago

          Meh, I can’t always be on point.

        • paulWTAMU
        • 8 years ago

        Me too. It’s scary. What he said makes sense, and on reflection it seems improbable that a 180W TDP would actually happen.

    • ronch
    • 8 years ago

    Take that, Prescott.

      • xeridea
      • 8 years ago

      At least it should see improved performance rather than Prescott’s inferior performance coupled with more heat/power, on a die shrink. Nice reminder of the crappy P4 days.

    • NeelyCam
    • 8 years ago

    My guess is that the high-current requirement is there to support some crazy Turbo settings that will draw a lot of current temporarily. I think VR-Zone’s conclusion is off-target – the “TDP” is what it’s always been, but Turbo will be much better than before.

      • tejas84
      • 8 years ago

      I agree, this will be a balls-to-the-wall CPU. The high end has to justify its existence, what with APUs like SB and Llano raising the bar for the minimum CPU experience. Here’s to 180W SB-E and Bulldozer!

      • NeelyCam
      • 8 years ago

      Ok, so everybody disagrees with me on this. So, let’s make this an “is NeelyCam right” moment.

      I predict that these CPUs are nowhere near 180W TDP. VR-Zone just misunderstood the meaning of that current requirement. This rumor is false.

    • just brew it!
    • 8 years ago

    Unless the mobo makers got a heads-up on this a few months ago, there may also be a lack of compatible motherboards at launch. They’ll all be scrambling to beef up their VRMs to handle this beast.

    That’s if it is true, of course. I’m inclined to wait for confirmation before believing this rumor.

    • ClickClick5
    • 8 years ago

    No mass haters? Everyone EXPLODED at AMD’s 140w TDP.

    I currently pull 380W from the wall when I’m gaming, and that is with a Q6600 and a 6970.
    180w? Meh, skip.

      • jensend
      • 8 years ago

      Lots of Intel shills here have been saying both that AMD’s power consumption is unreasonable and that there’s no reason to care about Bulldozer because SB-E will be better. Yet somehow they don’t seem to be fazed at all by this. Do these folks really think a six-core 180W SB derivative will be better than an eight-core 95W BD by so much that using twice the power is entirely reasonable?

        • accord1999
        • 8 years ago

        Given how power-efficient S1155 Sandy Bridge is, it’s reasonable to give Intel the benefit of the doubt.

        • Mentawl
        • 8 years ago

        Yes :)

        • chuckula
        • 8 years ago

        "Lots of Intel shills here have been saying both that AMD's power consumption is unreasonable and that there's no reason to care about Bulldozer because SB-E will be better."

        You raise 2 different points.

        1. That AMD's power consumption is unreasonable. Compared to both Nehalem and Sandy Bridge solutions that have been available since 2008, with performance that exceeds AMD's and lower power, this is a true assertion. This assertion is 100% unrelated to any evaluation of Sandy Bridge-E. Even if Sandy Bridge-E were completely cancelled tomorrow, it would still be true.

        2. There's no reason to care about Bulldozer because SB-E will be better. For raw performance, SB-E will likely be better, and even AMD wouldn't dispute that. However, Bulldozer will likely be better from a cost/performance perspective than the higher-end SB-E chips (the 3820 is actually going to be priced less than the 2600K, so that chip may be an exception).

        Bulldozer does not have to beat high-end SB-E chips to win. Instead, Bulldozer has to: 1. Be as good or slightly better than existing Sandy Bridge chips at the same price point and, hopefully, power envelope; and 2. Be profitable for AMD to make and sell at realistic price points.

        Right now, the jury is still out on performance, although Bulldozer should be good for perfectly threaded applications. The cost of manufacture and AMD's profitability are a huge question hanging in the air, though. Bulldozer will definitely be a much bigger chip than the consumer-level SB chips already on the market, and it remains to be seen if Bulldozer will be able to turn a profit for AMD if Intel decides that an almost one-year-old architecture is due for a round of price cuts.

          • jensend
          • 8 years ago

          AMD has been behind in power consumption, but claiming that 2008 Nehalem gear is still beating AMD’s latest is simply wrong. Llano is competitive with Arrandale and Clarkdale on perf/watt, and it’s ahead of 45nm Nehalems.

          Your claim that BD “will definitely be a much bigger chip” than SB is laughable. Everything I see says 8-core BD has roughly the same number of transistors as 4-core SB (~900m for BD and 996m for SB), and transistor density will be basically the same as well. If you’re trying to compare 8-core BD to 2-core SB you’re nuts. There will be 4-core BD variants; they’ll be roughly the same die size as the 2-core SB.

            • chuckula
            • 8 years ago

            "Everything I see says 8-core BD has roughly the same number of transistors as 4-core SB (~900m for BD and 996m for SB)"

            That's flat out wrong unless you do not believe AMD's own marketing information. According to AMD:

            0. A 4-core SB *including the graphics cores* is 995 million transistors; without the graphics cores it is 755 million transistors.
            1. Each module (2 cores) including 2MB of L2 cache has 213 million transistors.
            2. The 8-core versions of Bulldozer include an additional 8MB of L3 cache.

            1 + 2 together is: 213 million transistors/module * 4 modules + 8 megabytes at 8 bits/byte and 6 transistors/bit SRAM = 1.254 billion transistors. This does NOT include the memory controller, HyperTransport, etc.

            Unless you believe AMD is lying about the specs of its own chips, you have to think that Bulldozer is a bigger chip than the consumer SB chips by a fair margin. Not necessarily bigger than the higher-end SB-E chips, but BD is not targeting those chips either.

            Don't confuse the fact that Llano is only slightly larger than SB either. Llano includes a very large section of high-density transistors in the graphics cores that Bulldozer does not have, so BD cannot pack the transistors in the same way that Llano can. In order to believe that an 8-core Bulldozer will be smaller than the 4-core Sandy Bridges, and even the 4-core SB-E that will not include graphics transistors, you have to also not take AMD at their word about what is actually in Bulldozer.
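            For what it's worth, the arithmetic in that comment adds up; here is the same calculation spelled out, using only the figures quoted above:

```python
# Reproduce the transistor estimate from the comment above, using only the
# figures it quotes from AMD's marketing material.
transistors_per_module = 213e6   # per 2-core module, incl. 2MB L2
modules = 4                      # 8-core Bulldozer
l3_bytes = 8 * 1024**2           # 8 MB of shared L3
transistors_per_bit = 6          # 6T SRAM cell

core_transistors = transistors_per_module * modules
l3_transistors = l3_bytes * 8 * transistors_per_bit
total = core_transistors + l3_transistors   # ~1.254 billion, as quoted above

print(f"Modules: {core_transistors/1e6:.0f} M, L3: {l3_transistors/1e6:.0f} M")
print(f"Total (excl. memory controller, HyperTransport, etc.): {total/1e9:.2f} B")
```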

            • jensend
            • 8 years ago

            Bigger, quite possibly. But you said it would "definitely be a *much* bigger chip," and even if it were 30% bigger that doesn't qualify. But it won't be 30% bigger: you have to remember that the transistor density of cache is much higher than that of most of a CPU core, and quite a lot of BD's transistor budget is in its 2MB per-module L2. BD has 8 times as much L2 as SB and the same amount of L3.

            Because that L2 cache is so dense, each BD module's 213m transistors fit in just 31mm^2 (source: IEEE ISSCC presentation). The 8MB of L3 will certainly fit in less than 40mm^2 (people were doing 5.7mm^2/MB for L3 back on 45nm). So BD is ~160mm^2 plus uncore (NB, memory controller, &c).

            I don't have much of a clear idea how large the uncore will be; from what I understand Intel devotes around a third of the die area of SB to uncore, i.e. 72mm^2, so let's suppose the BD uncore is 80mm^2. This gives us an estimate of 240mm^2 for BD, just 1 and 1/9 as big as 4-core SB (and we're getting twice the cores and 8x the L2). Doesn't sound like a much bigger chip to me.
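            Spelled out, that area estimate runs roughly as follows (a sketch that only reuses the numbers quoted above; the 80mm^2 uncore is explicitly the comment's own guess):

```python
# Reproduce the die-area estimate from the comment above. All inputs are the
# figures quoted there; the uncore size is explicitly a guess.
module_area = 31.0     # mm^2 per 2-core module (ISSCC figure cited above)
modules = 4
l3_area = 40.0         # mm^2 upper-bound estimate for 8 MB of L3
uncore_area = 80.0     # mm^2, assumed (the comment's own guess)
sb_4core_area = 216.0  # mm^2, implied by the "72 mm^2 is about a third" figure

cores_plus_l3 = module_area * modules + l3_area   # ~164 mm^2 (rounded to ~160 above)
bd_area = cores_plus_l3 + uncore_area             # ~244 mm^2 (~240 in the comment)

print(f"Cores + L3: {cores_plus_l3:.0f} mm^2, with uncore: {bd_area:.0f} mm^2")
print(f"Relative to 4-core SB: {bd_area / sb_4core_area:.2f}x")
```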

          • shank15217
          • 8 years ago

          So you’re saying Sandy Bridge E would be even better than current Sandy Bridge processors in non-perfectly-threaded apps.

        • maroon1
        • 8 years ago

        Please delete

        • sschaem
        • 8 years ago

        Who said 95W was unreasonable?!

        How much faster do you think a 4-module Bulldozer is vs. a 4-core SB-E? It’s likely that they are balanced; both lose some and win some.
        Both have the same TDP.

        But Intel will have offerings with 50% more computing power (and 50% more peak consumption);
        AMD doesn’t have anything to match or beat that.
        Intel will have the ultimate CPU for workstations (they already do), and that’s why they can charge 3x as much as AMD will.

      • Pettytheft
      • 8 years ago

      For home users it’s irrelevant. For large businesses it becomes a factor, but I don’t get all the whining.

        • Kaleid
        • 8 years ago

        Nope, even the heat alone would make me skip these things. Max 125W or so… even for graphics cards.

        • jensend
        • 8 years ago

        "For home users *who live in Siberia and are deaf* it's irrelevant." FTFY

      • paulWTAMU
      • 8 years ago

      It certainly worries me. Particularly given that heat and noise seem to be likely to accompany such a high TDP.

      • flip-mode
      • 8 years ago

      To be fair, AMD’s 140w TDP still lost to Intel’s 95w.

      • maroon1
      • 8 years ago

      Do you realize that this is a rumor?

      There is NO PROOF that it consumes 180W. When you show us PROOF, then we can speak. Let us wait for actual reviews before we blindly believe that it consumes 180W.

      • fantastic
      • 8 years ago

      Feeding the trolls…

      180 watts is too much for a chip from any brand. Happy? I hope the rumors are false. It could mark the beginning of a bad trend.

        • ClickClick5
        • 8 years ago

        I hope so too. Indeed, that type of power draw and heat output would need a big heatsink, which more or less means a big fan. If the fan is a cheap one, you will have noise galore; with a $29 fan you would be happy… this can quickly mean liquid cooling is in store.

        So yes, for Intel’s sake and the consumer, and the consumer’s wallet, let’s hope/pray/wish this 180W figure is not true. If it is, skip this chip.

    • tejas84
    • 8 years ago

    Holy Mother Teresa! That is some serious TDP. Poor AMD don’t have a chance in hell against this…

      • xeridea
      • 8 years ago

      But the power company will thank them.

    • flip-mode
    • 8 years ago

    Hoooooly cow.

    I have no interest in ever owning a 180 watt CPU.

    No SB-E for me (it wasn’t likely anyway).

    Now I just need to see the BD review and I’ll know if AMD has any future in my desktop or if it’ll be SB or IB instead.

      • tejas84
      • 8 years ago

      flip-mode buying any other brand than AMD… not in this lifetime

        • flip-mode
        • 8 years ago

        I average one lifetime every 3 years, so this one is about up anyway. And I already built myself an i5-2500K system for work. I’m extremely value driven, whether it’s AMD/Intel or Radeon/Geforce or DDR2/DDR3 or Asus/Gigabyte/MSI/Asrock/Biostar. The only things I tend to be strongly against are trolls, shills, and fanboys.

    • chuckula
    • 8 years ago

    If these rumors are true, this is very interesting. Considering that normal Sandy Bridge chips that have been out for almost 9 months have very good thermals and power consumption, something weird must be happening with these newer chips.

    My guess is that the quad-channel memory controller and the massive amount of I/O crammed into these chips, along with the increased cache, is what’s at fault. On the low end, I’d be very surprised if the 3820 with 4 cores and 10MB L3 used much more power than a 2600K. If the power consumption is much higher, then this clearly points to issues with the I/O rather than the CPU cores and cache themselves. When you get up to the high end, the 15MB cache slapped on the 3960X could be a big contributor to the higher power draw.

    • maxxcool
    • 8 years ago

    *Impressive.* With a little overclocking, 250 watts should not be hard to hit. With a solid dual graphics card setup, 500 watts at the wall socket may become the norm…

      • sweatshopking
      • 8 years ago

      THAT’S WHAT I’VE BEEN WAITING FOR. I WANT A MINI NUCLEAR PLANT FOR MY CPU. THANK YOU TECH COMPANIES, MY DREAMS ARE COMING TRUE!!!!

      seriously though, who even wants this chip? by the time it launches, ib is just around the corner.

        • BestJinjo
        • 8 years ago

        1. IB is 9 months away
        2. IB is a quad core, not a hexacore
        3. IB is going to be a mild frequency refresh over the already power efficient 2500k/2600k. If you want significantly more horsepower, IB won’t be much better than those.
        4. 2500k/2600k already hit 4.5-4.6ghz overclocks. EVEN if IB can do 5.5ghz, that’s only a 20% increase in performance. SB-E offers at least a 50% performance increase for users who need more horsepower, since it’s 6C/12T.
        5. If you scoff at the idea of a 180W CPU, then:

        1) You are not a power user at all, in which case SB-E wasn’t intended for you
        2) You aren’t an enthusiast who overclocks your processor, since any modern processor outside of the 2500k easily smashes through the 180W barrier when overclocked.
        3) Considering that SB-E is mainly aimed at enthusiasts and the enterprise market, it makes sense that it doesn’t include a bundled cooler, as most users will rely on third-party coolers anyway due to better cooling performance and/or quieter operation.

        People seem to be very, very confused about what IB is. IB is a refresh, not a new microarchitecture. It is like Wolfdale was to Conroe, or Westmere to Nehalem/Lynnfield.

        If you want to wait for a new monster CPU, that’s Haswell in 2013. If you need a fast modern system, it makes no sense at all to wait for IB when you can get 2500k for $220.

          • NeelyCam
          • 8 years ago

          I don’t know why folks downthumbed you… Everything you say is solid.

          • sschaem
          • 8 years ago

          Benchmarks hint that IB got serious FP-unit optimizations on the CPU side, will run OpenCL/DirectCompute 11 software on a new stream computing architecture, and is built on 22nm tri-gate vs. 32nm.
          So IB could more than double raw computing vs. Sandy. (The leaked Cinebench benchmarks are already looking mighty impressive.)

          And who stops you from getting the 4-core SB-E vs. an i7-2600K? The price difference might not be that big, maybe $50. Then later upgrade to an 8-core IB-E…

          • xeridea
          • 8 years ago

          Right, since every “power user” must have their computer running @ 8 GHz just to survive. We all have 1500W PSUs. There is many a “power user” who doesn’t need to be sucking 180W to do what they do. Enthusiast != person who sits around all day seeing if they can get their CPU to 10 GHz or stress out the local power substation, to save a few seconds of video transcoding, or 1 FPS in a game they already run at 90 FPS maxed out. Not saying that isn’t a power user, but not ALL enthusiasts/power users would benefit from a 180W-stock/300W-OC CPU. It’s like the Nvidia approach, where they can make a faster GPU, but it is twice as big and uses 50% more power than it should.

            • maxxcool
            • 8 years ago

            You hurt my 1000-watt / 136-amp power supply’s feelings…

            • xeridea
            • 8 years ago

            Not anything against big PSUs, or not minding high power draw, or super quad-GPU setups; just saying that claiming anyone who scoffs at 180W stock power draw is not an enthusiast is asinine.

          • sweatshopking
          • 8 years ago

          I’ve overclocked every CPU I’ve had in the past half decade, and that’s a fair number.

          I’m aware of what IB is; I think you might be missing some info. It is not a Wolfdale-to-Conroe situation; it’s a much larger refresh than that. There are numerous enhancements, not the least of which is a DX11 GPU. I think you should Google it a bit.

            • maxxcool
            • 8 years ago

            /Throws his gang sign/ OCfLife ;) Yeah, I have overclocked everything I have ever owned.
            dx4-66mhz to 75mhz
            dx4-100 to 120 mhz
            pentium 60 to 75mhz (WITH the FDIV bug)
            Pentium 150 to 166mhz
            pentium 200mmx to 225mhz
            pentium ii 300c to 450mhz (That was a sweet cpu in the day)
            pentium iii 650 to 864mhz
            Amd thunderbird 1ghz to 1.3ghz with the graphite pencil trick
            Amd Athlon 1.33ghz to 1600mhz with the graphite pencil trick
            Intel 1.8ghz northwood p4 to 3.0ghz.
            Intel Dual core 2160 from 1.8ghz to 3.2ghz

            *current intel rig*
            i5-750 from 2.66ghz to 4.0ghz

            *current amd rig*
            1090t from 3.0ghz to 3.8ghz.

            Lovin’ every minute of it….

            • OneArmedScissor
            • 8 years ago

            I typed you a nice post…and then somehow hit the back button and wiped it. I’ll spare you the spam and just summarize/rant.

            Penryn was just about a technological miracle and redefined what we expect of computers, setting the standards of the last four years. We’re still using similarly clocked dual and quad-cores, with the same cache sizes, at the same prices, in the same types of computers.

            It was real bang for your buck and brought about some serious firsts, like mainstream quad-cores, sub-$100 3+ GHz dual-cores, quad-core laptops, and huge laptop battery life that is unmatched to this day. New high end CPUs are “faster,” but it’s largely intangible – and at the cost of battery life in laptops.

            So we’re still using computers the way that Penryn made possible. Ivy Bridge doesn’t change that. Yes, it has tri-gates, but everyone seems quick to forget that Penryn had high-k gates. We’ve seen that sort of leap in manufacturing before, but Penryn already did all the fun stuff. It was likely the last time we’ll see PCs just become outright better.

            However, Ivy Bridge is still interesting *because* it's a shrink without a huge revamp. That is not something we have seen with modern, highly integrated CPUs, and I think it's a welcome change. Sandy Bridge hit the point of diminishing returns with almost literally everything, so it's time to get it up to the battery life we used to see.

            Reasonable expectations of Ivy Bridge would be i5s having the full cache enabled, i5 quad-cores for laptops, 35W TDP quad-cores, improved power gating for quad-cores that closes the battery life gap with dual-cores, increased battery life for video playback, and potential maximum battery life matching Core 2 ULV. The graphics won't match Llano and the IPC improvement will come mostly from lowering the memory subsystem's latency by outright shrinking it.

            It's all about laptops and will be very boring for desktops. Why else do you think it's socket compatible? :p The big thing is really standardized USB 3.0, which does potentially change what can be done with the average computer.

            • sweatshopking
            • 8 years ago

            I don't disagree, but I would say that calling it "simply a shrink" is not accurate. There *are* changes to the architecture.

            • NeelyCam
            • 8 years ago

            +1. If that was a rant, I’d really like to see what your “nice post” looks like…

        • maxxcool
        • 8 years ago

        Indeed, this is not a very exciting “tock”….

        • ClickClick5
        • 8 years ago

        Who wants this chip? Those who feel inclined to upgrade with every stepping. Ever. Period.

        These people do exist. It is scary.

    • Sencapri
    • 8 years ago

    Does anyone know how much those X79 boards will cost? And what is the main difference between Socket 1155 and 2011? Quad- vs. dual-channel memory? 6 vs. 4 cores? Anything else?

      • Farting Bob
      • 8 years ago

      LGA2011 is essentially server chips aimed at the very high-end enthusiast market as well. It’s not really designed for them, though. Intel seems to be gradually trying to merge the two markets together.

        • demani
        • 8 years ago

        That was my understanding: this will go into HP workstations, Dell workstations, and Mac Pros, with a handful thrown into enthusiast builds or the XPS line. This isn’t intended for the normal user AFAIK, so power is a different issue. It’s going into systems that are on 20A tech power circuits, or in server rooms.

        Still, that is going to be hot hot hot.

    • 5150
    • 8 years ago

    Even more reason to wait for Ivy Bridge.

      • BestJinjo
      • 8 years ago

      Ivy is a quad-core CPU aimed at Socket 1155. Your comment makes no sense, since SB-E is a hexa-core, 12-thread CPU aimed at the workstation/enthusiast market. IB is going to be a $200-300 offering. People who need a 6C/12T CPU could care less that it uses 180W. Besides, an overclocked i7-920 uses > 300W at 4.0GHz. That didn’t stop people from buying those up in droves.

      Update: Here is latest pricing for SB-E hexacore:

      i7-3930K – 6 Core / 12 Threaded, 3.2ghz / 3.8ghz Turbo = $583 MSRP
      i7-3960X – 6 Core / 12 Threaded, 3.3ghz / 3.9ghz Turbo = $999 MSRP

      People who buy these either:

      1) Need the horsepower, so they won’t wait 9 months since to them time = $$$
      2) Will be able to easily afford upgrading to the latest and greatest every 12 months.

      So again, they’ll buy the next best thing Intel has in 12 months anyway.

        • 5150
        • 8 years ago

        Let me clarify:

        ……for me.

          • NeelyCam
          • 8 years ago

          Don’t you know that in TechReport discussions, everybody else knows what you need better than you do?

          • willmore
          • 8 years ago

            Can I just get a 2600K without the graphics? I’ll let you drop $50 off the price. :)

            • NeelyCam
            • 8 years ago

            Yeah – isn’t bundling like this illegal in the European Union?

        • crabjokeman
        • 8 years ago

        *Couldn't* care less.
